Planet OpenNMS

July 28, 2016

Adventures in Open Source

2016 Dev-Jam: Day 3

It’s hard to believe this year’s Dev-Jam is half over. After months of planning it seems to go by so fast.

One of the goals I had this week was to understand more about the OpenNMS Documentation Project. For years I’ve been saying that OpenNMS documentation sucks like that of most open source projects, but I can’t say that any more. It has actually become quite mature. There is a detailed installation guide, a user guide, an administration guide, and a guide for developers. With each release the docs are compiled right alongside the code, and they even rate their own section on the new website.

Web Site Docs Page

It’s written in AsciiDoc, and all of the documentation is version controlled and kept in git.

Ronny Trommer is one of the leads on the documentation project, and I asked him to spend some time with me to explain how everything is organized.

Ronny Trommer

Of the four main guides, the installation guide is almost complete. Everything else is constantly improving, with the user guide aimed at people working through the GUI and the administration guide focused more on configuration. For example, the discussion of the path outage feature is in the user guide, but how to turn it on is in the admin guide.

There is even something for everyone in the developers guide (I am the first to state I am not a developer). One section lays out the style rules for the documentation in great detail. For example, in order to manage changes, each sentence should be on a single line. That way a small change to, say, a misspelled word doesn’t cause a huge diff. Also, we are limited as to the types of images we can display, so people are encouraged to upload the raw “source” image as well as an exported one, to save time in the future should someone want to edit it.
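To make that concrete, here is a tiny made-up example in that style – two sentences, each on its own line, so a later fix to a typo in the second one shows up as a one-line diff:

The discovery process scans the configured address ranges.
It generates a newSuspect event for each address that responds.

(The sentences themselves are just placeholders; the point is the one-sentence-per-line layout.)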

It is really well done and now I’m eager to start contributing.

Speaking of well done, Jonathan has figured out what is keeping OpenNMS from using the latest version of OTRS (and he’s sent a patch over to them), and Jesse showed me some amazing work he’s done on the Minion code.

We’ve been struggling to figure out how to implement the Minion code since we want to be able to run it on tiny machines like the Raspberry Pi, but OpenNMS is written in Java, and that language carries a lot of overhead on such small systems. Jesse rewrote the Minion in Go and then uploaded it to a device on my home network. At only 5.6MB it’s tiny, and yet it was able to do discovery as well as data collection (including NRTG). Sheer awesomeness.

Wednesday was also Twins night.

Twins Tickets

For several years now we’ve been going as a group to see the Minnesota Twins baseball team play at Target Field. It’s a lot of fun, although this year the Germans decided that they’d had enough of baseball and spent the time wandering around downtown Minneapolis.

At first I thought they had the right idea, as the Braves went up 4 to 0 in the first and by the top of the fourth were leading 7 to 0. However, the Twins rallied and made it interesting, although they did end up losing 9 to 7.

Our seats were out in left field, ‘natch.

Twins Tickets

by Tarus at July 28, 2016 02:00 PM

July 27, 2016

Adventures in Open Source

2016 Dev-Jam: Day 2

By Day Two people have settled into a rhythm. Get up, eat breakfast, start hacking on OpenNMS. I tend to start my day with these blog posts.

It’s nice to have most of the team together. Remember, OpenNMS is over 15 years old so there is a lot of different technology in the monitoring platform. I think David counted 18 different libraries and tools in the GUI alone, so there was a meeting held to discuss cleaning that up and settling on a much smaller set moving forward.

In any case ReST will play a huge role. OpenNMS Compass is built entirely on ReST, and so the next generation GUI will do the same. It makes integrating with OpenNMS simple, as Antonio demonstrated in a provisioning dashboard he wrote for one of his customers in Italy.

Antonio Teaching

They needed an easier way to manage their ten thousand plus devices, so he was able to use the ReST interface to build out exactly what they wanted. And of course the source is open.
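As a rough illustration of the kind of calls such a dashboard makes (the hostname and credentials here are placeholders; the endpoints are the stock OpenNMS ReST resources for requisitions and nodes):

# list the provisioning requisitions defined on the server
curl -u admin:admin http://opennms.example.com:8980/opennms/rest/requisitions
# list the nodes currently in the database
curl -u admin:admin http://opennms.example.com:8980/opennms/rest/nodes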

Several years ago we started a tradition of having a local restaurant, Brasa, cater dinner one night. This year it was Tuesday, and it is always the best meal of the week.

Antonio Teaching

As we were getting ready to eat, Alex Hoogerhuis, a big supporter of OpenNMS who lives in Norway, decided to join us via our Double Robotics robot, Ulfbot. It worked flawlessly, and he was the best first-time driver we’ve had. Ben, Jeff and Jonathan joined him for a picture.

Alex and Team

We like using the Yudof Hall Club Room for Dev-Jam for a number of reasons, one being the big patio with picnic tables overlooking the river. Alex was able to drive around and spend some time with the rest of the team, although we had to lift him up to see over the wall to the Mississippi (we also had to carry him in when the wind picked up – heh).

Alex at Dinner

After dinner people kept working (DJ was up until nearly 2am chasing a bug) but we also took a break to watch Deadpool. It’s why “Dev-Jam” rhymes with “fun”.

by Tarus at July 27, 2016 02:49 PM

July 26, 2016

Adventures in Open Source

Review: X-Arcade Gaming Cabinet

Last year I wanted to do something special for the team to commemorate the release of OpenNMS Meridian.

Since all the cool kids in Silicon Valley have access to a classic arcade machine, I decided to buy one for the office. I did a lot of research and I settled on one from X-Arcade.

X-Arcade Machine

The main reasons were that it looked well-made and that it included all of my favorites: Pac-Man, Galaga, and Tempest.

X-Arcade Games

The final piece that sold me on it was the ability to add your own graphics. I went to Jessica, our Graphic Designer, and she put together this wonderful graphic done in the classic eight-bit “platformer” style and featuring all the employees.

X-Arcade Graphic

Ulf took the role of Donkey Kong, and here is the picture meant to represent me:

X-Arcade Tarus

The “Tank Stick” controls are solid and responsive, although I did end up adding a spinner since none of the controls really worked for Tempest.

When you order one of these things, they stress that you need to make sure it arrives safely. Seriously, like four times, in big bold letters, they state you should check the machine on delivery.

I was going to be out of town when it arrived, so I made sure to tell the person checking in the delivery to make sure it was okay (i.e. take it out of the box).

They didn’t (the box looked “fine”) and so we ended up with this:

X-Arcade Cracked Top

(sigh)

Outside of that, everything arrived in working order. You get a small Dell desktop running Windows with the software pre-installed, but you also get CDs with all the games that are included with the system. It’s a bit of a pain to set up since the instructions are somewhat vague, but after an hour or so I had it up and running.

Anyway, it is really fun to play. It supports MAME games, Sega games, Atari 2600 games and even that short-lived laserdisc franchise “Dragon’s Lair”. You can copy other games to the system if you have them, although scrolling through the menu can get a bit tiring if you have a long list of titles.

We had an issue with the CRT about 11 months after buying the system. I came back from a business trip to find the thing dark (it never goes dark; even if the computer is hung for some reason you’ll still see a “no signal” graphic on the monitor). It turns out the CRT had died, but they sent us a replacement under warranty, hassle free. It took about an hour to replace (those instructions were pretty detailed) and it worked better than ever afterward.

This motivated me to consider fixing the top. When we had the system apart to replace the monitor, I noticed that the top was a) the only thing broken and b) held on with eight screws. I contacted them about a replacement piece and to my surprise it arrived two days later – no charge.

The only issue I have remaining with the system is the fact that it is Windows-based. This seems to be the perfect application for a small solid-state Linux box, but I haven’t had the time to investigate a migration. Instead I just turned off or removed as much software as I could (all the Dell Update stuff kept popping up in the middle of playing a game) and so far so good.

I am very happy with the product and extremely happy with the company behind it. If you are in the market for such a cabinet, please check them out.

by Tarus at July 26, 2016 10:07 PM

2016 Dev-Jam: Day 1

Dev-Jam officially started on Monday at 10am, when I did my usual kick-off speech before turning it over to Seth and Jesse, who handle the technical side of things.

Yesterday I stated that this was our tenth Dev-Jam at UMN. I forgot that the first one was held at my house, so this is actually the ninth (we’ve still had eleven since 2005).

Yudof Club Room

Everyone went around the room and talked about the things they wanted to work on this week. A lot of them focused on Minion, a technology rather unique to OpenNMS. A Minion is a Karaf container that implements features for remote monitoring. It is key for OpenNMS to be able to scale to the Internet of Things (IoT) level of millions of devices and billions of metrics. And speaking of IoT, Ken turned me on to openHAB, which is something I need to check out.

Yudof Kitchen

It is often hard for me to describe Dev-Jam to other people, as it is truly a lightly structured “un-conference”. In a great example of the Open Source Way it is very self-organizing, and I look forward to Friday when everyone presents what they have done.

Some of the Germans

We did have Alex Finger, one of the creators of the OpenNMS Foundation, join us via robot. He was having some sound issues and I think he got stymied by the robot’s lack of hands when he came across a door, but it was cool that he was able to visit from Europe.

Alex on the Robot

We use this week for planning and sharing, so Jesse took some time to go over the Business Service Monitor (BSM), which allows you to create a “business level” view of your services versus just the devices themselves. It is fully implemented via ReST and is pretty powerful, although as with a lot of things in OpenNMS, that very power can add complexity. I’m hoping our community will find great uses for it.

Jesse and BSM

That evening about half of us walked to a theatre to see Star Trek Beyond. Most of us disliked it and I posted a negative review, but it was fun to go out with my friends.

by Tarus at July 26, 2016 03:15 PM

July 25, 2016

This Week in OpenNMS

This Week in OpenNMS: July 25th, 2016

In the last week we worked on minion, topology maps and BSM, and release preparation and bug fixes.

Github Project Updates

  • Minion

    Pradeep and Malatesh's minion trapd kafka and blueprint work was wrapped up. Jesse and Chandra did more work on making the detectors run in the minion. Seth fixed Dis…

July 25, 2016 04:57 PM

Adventures in Open Source

2016 Dev-Jam: Day 0

♬ It’s the most wonderful time of the year ♬

Ah yes, it’s Dev-Jam time, where we descend onto the campus of the University of Minnesota, Twin Cities, for a week of OpenNMS goodness.

This is our eleventh annual Dev-Jam and our tenth at UMN. They are really good hosts so we’ve found it hard to look elsewhere for a place to hold the conference.

This is not a user’s conference. That is coming up in September. Instead, this is a chance for the core contributors of OpenNMS, and those people who’d like to become core contributors, to get together, share and determine the direction of OpenNMS for another year.

This year we are just shy of 30 people from four different countries: the US, the UK, Italy and India. Alejandro and his wife Carolina are now permanent residents of the US so I can’t really count them as being from Venezuela any more, and that happened directly through his involvement with Dev-Jam. We’ve had more people, but 30 seems to be the magic number (one year we had 40 and it was much harder to manage).

MSP sign at airport

My trip to MSP was uneventful. I flew through Dallas even though there is a direct RDU->MSP flight on Delta since I’m extremely close to Lifetime Platinum status on American Airlines. Also, AA has added a cool feature on their mobile app that lets me track my bags. This was important since I was shipping a box of four 12-packs of Cheerwine – a Dev-Jam favorite and as always a target for TSA inspection (apparently a 40+ pound box of soda is suspicious). Everything got here fine.

Including Ulf:

Ulf in Admiral's Club

Ulf is the OpenNMS mascot and he, too, is a product of Dev-Jam. Many years ago Craig Miskell came to Dev-Jam from New Zealand. He brought this plush toy and gave it to the Germans, who named him “Ulf”. Since then he has been around the world spreading the Good News about OpenNMS, so it wouldn’t be Dev-Jam without him.

We stay in a dorm called Yudof Hall where we take over the Club Room, a large room on the ground floor that includes a kitchen and an area with sofas and a television. In the middle we set up tables where we work, and due to UMN being a top-tier university we have great bandwidth. There is a huge brick patio next to it that looks out over the Mississippi River. It’s a very nice place to spend the week.

Speaking of the Mississippi, we crossed it last night to our usual kick-off spot, the Town Hall Brewery. As a cocktail aficionado, I was happy to see some craft cocktails on the menu, and a number of us tried the “Hallbach”, their take on the Seelbach Cocktail:

Hallbach Cocktail

It was very nice, as they used a high proof bourbon and replaced the champagne with sparkling cider.

We like Town Hall since we can seat 30 people. We do cater in as well as go out. The new light rail service to campus makes getting around easy, especially to the Mall of America and Target Field.

Speaking of baseball, we’re all going to the game on Wednesday. If you are in the area and want to join us, I should have a couple of tickets available. Just drop me a note. We also brought along the Ulfbot, which is a tele-presence robot so do the note dropping thing if you want to “visit”.

Dev-Jam!

by Tarus at July 25, 2016 02:25 PM

July 21, 2016

OpenNMS Releases

OpenNMS Horizon 18.0.1

OpenNMS 18.0.1 (code name: Platypus) is now available.

While it contains a large number of bug fixes and a few enhancements, the most noticeable change is a fix for the distributed and geographical maps. We had been using MapQuest's (wonderful and freely-available) OpenStreetMap-compatible tile ser…

July 21, 2016 08:05 PM

OpenNMS.org Blog

OUCE 2016 - Speaker and Talks

OpenNMS User Conference: Europe (OUCE) is a series of talks about monitoring and network management with OpenNMS. Our conference creates a time and place for the OpenNMS community to share information, discuss ideas, and work together to improve monitoring with the free…

July 21, 2016 06:07 PM

July 20, 2016

Adventures in Open Source

New Fancy Website for www.opennms.org

As some of you may have noticed, a little while ago the OpenNMS Project website got updated to a new, fancy, responsive version.

OpenNMS Platform

This was mainly the work of Ronny Trommer with a big assist from our graphic designer, Jessica.

We are often so busy working on the code that we forget how important it is to tell people about what we are doing. Most people who take the time to learn about the project realize how awesome it is, but it can be hard to get over that first hump in the learning curve.

I hope that the new site will reflect both the benefits of using OpenNMS and the work of the community behind it.

by Tarus at July 20, 2016 11:39 AM

July 19, 2016

Adventures in Open Source

OpenNMS Meridian 2016 Released

I am woefully behind on blog posts, so please forgive the latency in posting about Meridian 2016.

As you know, early last year we split OpenNMS into two flavors: Horizon and Meridian. The goal was to create a faster release cycle for OpenNMS while still providing a stable and supportable version for those who didn’t need the latest features.

This has worked out extremely well. While there used to be eighteen months or so between major releases, we did five versions of Horizon in the same amount of time. That has led to the rapid development of such features as the Newts integration and the Business Service Monitor (BSM).

But that doesn’t mean the features in Horizon are perfect on Day One. For example, one early adopter of the Newts integration in Horizon 17 helped us find a couple of major performance issues that were corrected by the time Meridian 2016 came out.

The Meridian line is supported for three years. So, if you are using Meridian 2015 and don’t need any of the features in Meridian 2016, you don’t need to upgrade. Fixes for major performance issues, all security issues, and most of the new configurations will be backported to that release until Meridian 2018 comes out.

Compare and contrast that with Horizon: once Horizon 18 was released, all work stopped on Horizon 17. This means a much more rapid upgrade cycle, the upside being that Horizon users get to see all the shiny new features first.

Meridian 2016 is based on Horizon 17, which has been out since the beginning of the year and has been highly vetted. Users of Horizon 17 or earlier should have an easy migration path.

I’m very happy that the team has consistently delivered on both Horizon and Meridian releases. It is hoped that this new model will both keep OpenNMS on the cutting edge of the network monitoring space while providing a more stable option for those with environments that require it.

by Tarus at July 19, 2016 12:32 AM

July 18, 2016

This Week in OpenNMS

This Week in OpenNMS: July 18th, 2016

In the last week we worked on VMware monitoring, minion, packaging and startup, topology maps and BSM, node and remote poller maps, and documentation.

Github Project Updates

  • VMware Monitor

    Christian fixed the VMware monitor against VMware vSphere 6.

  • Minion

    Chandra did more work on the Minion cod…

July 18, 2016 02:27 PM

July 15, 2016

OpenNMS.org Blog

OpenNMS Events

Dev-Jam 2016

Our eleventh developers' conference at the University of Minnesota is in just a few days! This is the place to go if you want to spend a week with OpenNMS core contributors to learn or share your experiences. We don't make a set plan for Dev-Jam. Instead, we use this time of the year to…

July 15, 2016 11:50 PM

July 12, 2016

OpenNMS.org Blog

Releases

Release Announcements

July 12, 2016 08:42 AM

July 11, 2016

This Week in OpenNMS

This Week in OpenNMS: July 11th, 2016

In the last week we worked on minion, topology maps, BSM, UI updates, discovery, and event configuration.

Github Project Updates

  • Minion

    Chandra did more work on making detectors run on Minions.

  • Topology Maps and Business Service Monitor

    Markus, Dustin, Jesse, and Christian spent more time on cus…

July 11, 2016 03:05 PM

July 06, 2016

Adventures in Open Source

Upgrading Linux Mint 17.3 to Mint 18 In Place

Okay, I thought I could wait, but I couldn’t, so yesterday I decided to do an “in place” upgrade of my office desktop from Linux Mint 17.3 to Mint 18.

It didn’t go smoothly.

First, let me stress that the Linux Mint community strongly recommends a fresh install every time you upgrade from one release to another, and especially when it is from one major release, like Mint 17, to another, i.e. Mint 18. They ask you to back up your home directory and package lists, base the system, and then restore. The problem is that I often make a lot of changes to my system, which usually involves editing files in the system /etc directory, and this process doesn’t capture that.

One thing I’ve always loved about Debian is the ability to upgrade in place (and often remotely) and this holds true for Debian-based distros like Ubuntu and Mint. So I was determined to try it out.

I found a couple of posts that suggested all you need to do is replace “rosa” with “sarah” in your repository file, and then do an “apt-get update” followed by an “apt-get dist-upgrade”. That doesn’t work, as I found out, because Mint 18 is based on Xenial (Ubuntu 16.04) and not Trusty (Ubuntu 14.04). Thus, you also need to replace every instance of “trusty” with “xenial” to get it to work.

Finally, once I got that working, I couldn’t get into the graphical desktop. Cinnamon wouldn’t load. It turns out Cinnamon is in a “backport” branch for some reason, so I had to add that to my repository file as well.
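For anyone who wants the short version, the whole repository switch boils down to something like this (a sketch, not an official procedure – back up the file first and compare the result against the listing below):

# keep a copy of the original repository list
sudo cp /etc/apt/sources.list.d/official-package-repositories.list{,.bak}
# swap the Mint codename and the Ubuntu base release
sudo sed -i -e 's/rosa/sarah/g' -e 's/trusty/xenial/g' /etc/apt/sources.list.d/official-package-repositories.list
# make sure the linuxmint line also carries the "backport" component, then upgrade
sudo apt-get update
sudo apt-get dist-upgrade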

To save trouble for anyone else wanting to do this, here is my current /etc/apt/sources.list.d/official-package-repositories.list file:

deb http://packages.linuxmint.com sarah main upstream import backport #id:linuxmint_main
# deb http://extra.linuxmint.com sarah main #id:linuxmint_extra

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ xenial partner

Note that I commented out the “extra” repository since one doesn’t exist for sarah yet.

The upgrade took a long time. We have a decent connection to the Internet at the office and still it was over an hour to download the packages. There were a number of conflicts I had to deal with, but overall that part of the process was smooth.

Things seem to be working, and the system seems a little faster but that could just be me wanting it to be faster. Once again many thanks to the Mint team for making this possible.

by Tarus at July 06, 2016 03:02 PM

July 05, 2016

Adventures in Open Source

MC Frontalot and The Doubleclicks at All Things Open

I am happy to finally be able to confirm that MC Frontalot and his band, along with The Doubleclicks, will be playing an exclusive show during the All Things Open conference in October. The OpenNMS Group, at great expense (seriously, this is like our entire marketing budget for the year), has secured these two great acts to help celebrate all things open, and All Things Open.

MC Frontalot

I first met Damian (aka Frontalot) back in 2012 when I hired him to play at the Ohio Linuxfest. I subscribe to the Chris Dibona theory that open source business should give back to the community (he once described his job as “giving money to his friends”) and thus I thought it would be cool to introduce the übernerd Frontalot to the open source world.

We hit it off and now we’ve hired him a number of times. The last time was for OSCON in 2015, where we decided to bring in the entire band. What an eye-opening experience that was. A lot of tech firms talk about “synergy” – the situation when the whole is greater than the sum of its parts – but Front with his band takes the Frontalot experience to a whole new level.

Also at the OSCON show we were able to get The Doubleclicks to open. This duo of sisters, Angela and Aubrey Webber, bring a quirky sensibility to geek culture and were the perfect opening act.

Now, I love open source conferences, but I overdid it last year. So this year I’m on a hiatus and have been to *zero* shows, but I made an exception for All Things Open. First, it’s in my home city of Raleigh, North Carolina, which is also home to Red Hat. We like to think of the area as the hotbed of open source if not its heart. Second, the conference is organized by Todd Lewis, the Nicest Man in Open Source™. He spends his life making the world a better place and it is reflected in his show. We couldn’t think of a better way to celebrate that than to bring in some top entertainment for the attendees.

That’s right: there are only two ways to get into this show. The easiest is to register for the conference, as the conference badge is what you’ll need to get into the venue. The second way is to ask us nicely, but we’ll probably ask you to prove your dedication to free and open source software by performing a task along the lines of a Labor of Hercules, except ours will most likely be obscenely biological.

Seriously, if you care about FOSS you don’t want to miss All Things Open, so register.

If you are unfamiliar with the work of MC Frontalot, may I suggest you check out “Stoop Sale” and “Critical Hit”, or if you’re Old Skool like me, watch “It Is Pitch Dark”. His most recent album was about fairy tales (think of it as antique superhero origin stories). Check out “Start Over” or better yet the version of “Shudders” featuring the OpenNMS mascot, Ulf.

As for The Doubleclicks, you can browse most of their catalog on their website. One song that really resonates with me, especially at conferences, is “Nothing to Prove” which I hope they’ll do at the show.

Oh, and I saved the best for last, Front has been working on a free software song. Yup, he is bringing his mastery of rhymes to bear on the conflict between “free as in beer” and “free as in liberty” and its world premiere will be, you guessed it, at All Things Open.

The show will be held at King’s Barcade, just a couple of blocks from the conference, on Wednesday night the 26th of October. You don’t want to miss it.

by Tarus at July 05, 2016 01:50 PM

July 01, 2016

Adventures in Open Source

First Thoughts on Linux Mint 18 “Sarah”

I am a big fan of Linux Mint and I look forward to every release. This week Mint 18 “Sarah” was released. I decided to try it out on my Dell XPS 13 laptop since it is the easiest machine of mine to base, and the Mint team really hasn’t suggested an upgrade path. The one article I was able to find suggested a clean install, which is what I did.

First, I backed up my home directory, which is where most of my stuff lives, and I backed up the system /etc directory since I’m always making a change there and forgetting that I need it (usually concerning setting up the network interface as a bridge).

I then installed a fresh copy of Mint 18. Now they brag that the HiDPI support has improved (as I will grouse later, so does everyone else) but it hasn’t. So the first thing I did was to go to Preferences -> General and set “User interface scaling” to “Double”. This worked pretty well in Mint 17 and it seems to be fine in Mint 18 too.

I then did a basic install (I used a USB dongle to connect to a wired network since I didn’t want to mess with the Broadcom drivers at this point) and chose to encrypt the entire hard drive, which is something I usually do on laptops.

I hit my first snag when I rebooted. The boot cycle would hang at the password screen to decrypt the drive. In Mint 17 the password prompt would be on top of the “LM” logo. I would type in the password and it would boot. Now the “LM” logo has five little dots under it, like the Ubuntu boot screen, and the password prompt is below that. It’s just that it won’t accept input. If I boot in recovery mode, the password prompt is from the command line and works fine.

(sigh)

This seems to be a problem introduced with Ubuntu 16.04. Well, before I dropped back down to Mint 17 I decided to try out that distro as well as Kubuntu. My laptop was based in any case.

I ran into the usual HiDPI problems with both of those. I really, really want to like Kubuntu but with my dense screen I can’t make out anything and thus I can’t find the option to scale it. Ubuntu’s Unity was easier as it has a little sliding scaler, but when I got it to a resolution I liked many of the icon labels were clipped, just like last time I looked at it.

(sigh)

Then it dawned on me that I could just install Mint 18 and see if encrypting just my home directory would work. It did, so for now I’m using Mint 18 without full disk encryption. The next step was to install the proprietary Broadcom driver, and then wireless worked.

Next, I edited /etc/fstab and added my backup NFS mount entry, mounted the drive and started restoring my home directory. That went smoothly, until I decided to reboot.

The laptop just hung at the boot screen.

Now there is a bug in the Dell BIOS where, if I try to boot with a USB network adapter plugged in, it erases the EFI entry for “ubuntu” and I have to go into setup and manually re-add it. Thus I was disconnecting the dongle for every reboot. On a whim I plugged it back in and the system booted. This led me to believe that there was an issue with the NFS mount in /etc/fstab, and that’s what the problem turned out to be.

The problem is that systemd likes to get its little hands into everything, so it tries to mount the volume before the wireless network is initialized. The solution is to add a special option that will cause systemd to automount the volume when it is first requested. Here is what worked:

172.20.10.5:/volume1/Backups /media/backups nfs noauto,x-systemd.automount,nouser,rsize=8192,wsize=8192,atime,rw,dev,exec,suid 0 0

The key bits are “noauto,x-systemd.automount”.
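If you want to confirm that systemd picked up the change, a quick check looks something like this (the unit name is derived from the mount point, so yours will differ if you mount somewhere else):

# regenerate the units systemd derives from /etc/fstab
sudo systemctl daemon-reload
# the mount point shows up as an automount unit named after its path
systemctl list-units --type=automount
systemctl status media-backups.automount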

With that out of the way, I added mounts for my music and my video collection. That’s when I noticed a new weirdness in Cinnamon: dual icons on the desktop. I have set the desktop option to display icons for mounted file systems and now I get two of them for each remote mount point.

Double Desktop Icons

Annoying, and I haven’t found a solution, so I just turned that option back off.

Now I was ready to play with the laptop. I’m often criticized for buying brand new hardware and expecting solid Linux support (yeah, you, Eric), but this laptop has been out for over a year. Still, the trackpad is a little wonky – the cursor tends to jump to the lower right-hand corner. Mint 18 ships with a 4.4 kernel but I had been using Mint 17 with a 4.6 kernel. One of the features of 4.6 is “Dell laptop improvements”, so while I was hoping 4.4 would work for me (and that the features I needed would have been backported), it isn’t so. I installed 4.6 and my trackpad problems went away.

The final issue I needed to fix concerned ssh. I use ssh-agent and keys to access a lot of my remote servers, and it wasn’t working on Mint 18. Usually this is a permissions issue, but I compared the laptop to a working configuration on my desktop and the permissions were identical.

The error I got was:

debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0

It turns out that OpenSSH 7.0 seems to require that an “IdentityFile” parameter be expressly defined. I might be able to do this in ssh_config but instead I just created a ~/.ssh/config file with the line:

IdentityFile ~/.ssh/id_dsa_main

That got me farther. Now the error changed to:

debug1: Skipping ssh-dss key /home/tarus/.ssh/id_dsa_main - not in PubkeyAcceptedKeyTypes
debug1: Skipping ssh-dss key tarus@server1.sortova.com - not in PubkeyAcceptedKeyTypes

It seems the key I created back in 2001 is no longer considered secure. Since I didn’t want to go through the process of creating a new key just right now, I added another line to my ~/.ssh/config file:

IdentityFile ~/.ssh/id_dsa_main
PubkeyAcceptedKeyTypes=+ssh-dss

and now it works as expected. The weird part is that you would think this would be controlled on the server side, but the failure was coming from the client and thus I had to fix it on the laptop.
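The longer-term fix is, of course, to replace the 2001-era DSA key with a modern one, something along these lines (a sketch; once the new key is on the server the PubkeyAcceptedKeyTypes workaround can go away):

# generate a new Ed25519 key pair (accept the default path or choose your own)
ssh-keygen -t ed25519
# install the public half on the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub tarus@server1.sortova.com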

Now that it is installed and seems to be working, I haven’t really played around with Mint 18 much, so I may have to write another post soon. I do give them props for finally updating the default desktop wallpaper. I know the old wallpaper was traditional, but man was it dated.

This was a more complex upgrade than usual, and I don’t agree that you must base your system to do it, even from major release to major release. This isn’t Fedora. It’s based on Ubuntu, which is based on Debian, and I have rarely had issues with those upgrades. Usually you just change your repositories and then do “apt-get dist-upgrade”.

But … I might wait a week or two once they approve an upgrade procedure and let other people hit the bugs first, just in case. My desktops are more important to me than my laptop.

Hats off to the Mint team. I’m pretty tied to this operating system so I’m encouraged that it keeps moving forward as quickly as it does.

by Tarus at July 01, 2016 10:42 PM

June 30, 2016

Adventures in Open Source

The Inverter: Episode 70– Delicious Amorphous Tech Bubble

This week the Gang of Four is down to three as Stuart is off on holiday with his daughter in New York City. The episode runs 82 minutes long and I’m seeing a trend that the shorter episodes happen when Jeremy is out. I think it is because he clutters up the whole show with facts and reasoning.

The first segment asked the question “Are we in another tech bubble, and if so, what shape is it?” Of course we are in another tech bubble, as Jeremy so deftly demonstrates by comparing a number of startups with over a billion dollars in valuation to real companies such as General Electric. They talk about a number of reasons for it, but I think they left an important one out: egos.

Look, growing up as a geek in the late 1970s and early 1980s, we didn’t get much respect. Now with the various tech bubbles and widespread adoption of technology by the masses, geeks can at least be wealthy if not popular. But I think we still harbor, deep down, a resentment of the jocks and popular kids that results in problems with self-esteem. Take Marc Andreessen as an example. By most measures he’s successful, but take a look at him. He is not a pretty man, even though that male pattern baldness does suggest a big wee-wee. I think he still has something to prove, which is why he dumps money into impossible things like uBeam, which has something like a $500MM valuation. I think a lot of the big names in Silicon Valley have such a huge fear of missing out that they drive up valuations on companies without a business model and no hope of making a profit, much less a product.

But then Microsoft bought LinkedIn for $26.2B so what do I know.

Well, I do know the shape of the tech bubble: it’s a pear.

In the next segment the guys almost spooge all over themselves talking about the Pixel C tablet. I’ve never been a tablet guy. I have a six-inch … smart phone and it works fine for all of my mobile stuff. If I need anything bigger, I use a Dell XPS laptop running Mint. I do own a Nexus 10 but only use it to read eBooks that come in PDF format.

But all three of them really like it, meaning that if I decide to get a new tablet I’ll seriously consider it. Bryan did mention a couple of apps I was unfamiliar with, so I’ll have to check them out.

The first is called Termux and it provides a terminal emulator (already got one) but it adds a Linux environment as well. Could be cool. The other is DroidEdit which is a text editor for Android with lots of features, similar to vim or gedit on steroids. Bryan used these during his ill-fated attempt to live in the Linux shell for 30 days.

Apparently the Pixel C is magnetic, with magnets so strong you can hang it on your fridge. Add a webcam and I won’t need one of these.

The third segment was on Nextcloud. I’ll give the Nextcloud guys some props for getting press. This is something like the third in-depth interview I’ve listened to in the past three weeks. If you’ve been living under a rock and don’t know that Nextcloud is a fork of OwnCloud, start here. They interviewed Frank Karlitschek and Jan-Christoph Borchardt about the split and their plans.

I was hoping for more details on what caused the fork (because I’m a nosy bastard) but Jono started off with something like a 90-second leading question to Frank that pretty much handed him an explanation. I was screaming “Objection! Leading the witness!” but it didn’t help. I guess it really doesn’t matter.

I do think I’d really enjoy meeting Frank. They are dedicated to keeping Nextcloud 100% open source (like good ol’ OpenNMS). They also brought up a point that is very hard to make with large, complex open source projects. Everyone will ask “How do you compare with OwnCloud” when the better question is “How do you compare to Dropbox”? At OpenNMS we are always getting the “How are you different from Nagios” when the better question is “How do you compare to Tivoli or OpenView”?

The fourth segment was on the XPrize Global Learning Project. The main takeaway I got from it was that the very nature of the XPrize doesn’t lend itself to the Open Source Way. The prize amount is so high it doesn’t encourage sharing. Still, a couple of projects are trying it so I wish them all the luck.

The final “segment” is the outro where the guys usually just shoot the breeze. They mentioned Stuart, visiting the US, getting slammed with Brexit questions, and I do find that amusing, having traveled to the UK numerous times and been peppered with questions about stupid US politics. It’s one of the reasons I hope Donald Trump doesn’t get elected – I’m not ready to go back to claiming to be Canadian when I travel.

They also talked about fast food restaurants. I’m surprised In-N-Out Burger didn’t get a mention. From the moment a new one opens it is usually slammed at all hours. They did mention Chick-Fil-A, which I used to love until I boycotted them over their political activism. There is a pretty cool article on five incredible fast food chains you shouldn’t eat at (including Chick-Fil-A) and one you should but probably can’t (In-N-Out).

Overall I thought it was a solid show, although it needed more ginger. Good to see the guys getting back into form.

by Tarus at June 30, 2016 08:11 PM

June 24, 2016

Adventures in Open Source

OpenNMS and Elasticsearch

With Horizon 18 we added support for sending OpenNMS events into Elasticsearch. Unfortunately, it only works with Elasticsearch 1.0. Elasticsearch 2.0 and higher requires Camel 17, but OpenNMS can’t use it. I wondered why, and if you were wondering too, here is the answer from Seth:

Camel 17 has changed their OSGi metadata to only be compatible with Spring 4.1 and higher. We’re still using Spring 4.0 so that’s one problem. The second issue is that ActiveMQ’s OSGi metadata bans Spring 4.0 and higher. So currently, ActiveMQ and Camel are mutually incompatible with one another inside Karaf at any version higher than the ones that we are currently running.

The biggest issue is the ActiveMQ problem. I’ve opened this bug and it sounds like they’re going to address it in their next major release.

So there you have it.

by Tarus at June 24, 2016 09:06 PM

June 17, 2016

Adventures in Open Source

The Inverter: Episode 69 – Bill and Ted and Jeremy and Bryan and Jono and Stuart’s Excellent Adventure

So the Gang of Four decided to actually produce a regular episode of Bad Voltage for the first time in, like, a month, so I decided to resurrect this little column making fun of them.

I am actually supposed to be on vacation this week, but for me vacation means working around the farm. I was working outside when the heat index hit 108.5F so while I was recovering from heat stroke I decided to give this week’s show a listen.

Clocking in at a healthy 75 minutes, give or take, it was an okay show, although the last fifteen minutes kind of wandered (much like most of this review).

The first segment concerned the creation of NextCloud as a fork of OwnCloud. I’ve already presented my thoughts on it from Bryan’s YouTube interview with the founders of NextCloud, and not much new was covered here. But it was a chance for all four of them to discuss it. One of the touted benefits of the new project is the lack of a contributor agreement. I don’t find this a good thing. Note that while I wholeheartedly agree that many contributor agreements are evil, that doesn’t make them all evil. Take the OpenNMS contributor agreement. It’s pretty simple, and it protects both the contributor and the project. The most important feature, to me, is that the contributor states that they have a right to contribute the code to the project. I think that’s important, although if it were lacking or the contributor lied, the results would be the same (the infringing code would be removed from the application). It at least makes people think just a bit before sending in code.

Bryan made an offhand mention about trademarks in the same discussion, and I wasn’t sure what he meant by it. Does it mean NextCloud won’t enforce trademarks, or that there is an easy process that allows people to freely use them? I think enforcing trademarks is extremely important for open source companies. Otherwise, someone could take your code, crap all over it, and then ship it out under the same name. At OpenNMS we had issues with this back in 2005 but luckily since then it has been pretty quiet.

While there was even more speculation, no one really knows why the NextCloud fork happened. Some say it was that Frank Karlitschek was friends with Niels Mache of Spreed.me and wanted a partnership, but OwnCloud was against it. I think we’ll never know. Another suggestion that has been made is that it had to do with the community of OwnCloud vs. the investors. Jono made the statement that VCs don’t take an active role in the community, but I have to disagree. My interactions with 90% of VCs have been like an episode of Silicon Valley, and while they may not take an active role, you can expect them to say things like “These features over here will be part of our ‘enterprise’ version and not open, and make sure to hobble the ‘community’ version to drive sales, but other than that, run your community the way you want.”

One new point that was brought up was the business perception of the company. I think everyone who self identifies as an open source fan who is using OwnCloud will most likely switch to NextCloud since that is where the developers went, but will businesses be cautious about investing in NextCloud? The argument can be made that “who knows what will set Frank off next?” and the threat of NextNextCloud might worry some. I am not expecting this to happen (once bitten, twice shy, I bet Frank has learned a lot about what he wants out of his project) but it is a concern.

It is similar to LibreOffice. I don’t know anyone in the open source world using OpenOffice, but it is still huge outside of that world (I did a ride-along with a friend who is a police officer and was pleasantly surprised to see him bring up OpenOffice on his patrol car’s laptop).

It kind of reminds me when Google killed Reader and then announced Keep – seemed a bit ironic at the time. If a company can radically change or even remove a service you have come to rely on, will you trust them in the future?

The segment ended with a discussion of the early days of Ubuntu. Bryan made the claim that Ubuntu was made as an easier-to-use version of Debian, which Jono vehemently denied. He claimed the goal was to create a free, powerful desktop operating system. All I remember from those days is those kids from the United Colors of Benetton ads on the covers of the free CDs.

The next piece was Bryan reviewing the latest Dell XPS 13 laptop. My last two laptops have been XPS 13 models and I love them. They ship with Linux (which I want to encourage) and I find they provide a great Linux desktop experience.

I got my newest one last year, and the main issue I’ve had is with the trackpad. Later kernels seem to have addressed most of my problems. I also dumped the Ubuntu 14.04 that shipped with it in exchange for Linux Mint, but I’m still running mainline kernels (4.6 at the moment). I’m eager for Mint 18 to release to see if the (rumoured) 4.4 kernel will work well (they keep backporting device driver changes) but outside of that I’ve had few problems.

Battery life is great, and the HiDPI screen is a big improvement over my old XPS 13. The main weirdness, for my model, is the location of the camera. In order to make the InfinityEdge display, they moved it to the bottom left of the screen so that the top bezel could be as thin as possible. It means people end up looking at the flabby underside of my chin instead of my face at times, but I use it so little that it doesn’t bother me much.

The third segment was about funding open source projects. It’s an eternal question: how do you pay for developers to work on free software? The guys didn’t really address it, focusing for the most part on programs that would provide some compensation for, say, travel to a conference, versus paying someone enough to make their mortgage. Stuart finally brought up that point but no real answers were offered.

The last fifteen minutes was the gang just shooting the breeze. Bryan used the term “duck fart”, which apparently is a cocktail (sounds nasty, so don’t expect it on the cocktail blog). There is also, apparently, a science fiction novel called Bad Voltage that is not supposed to be that great, and the suggestion was made that the four of them should write their own version, but in the form of an “exquisite corpse” (my term, not theirs) where each would write their section independently and see what happens when it gets combined.

All in all, not a horrible show but not great, either. It is nice to have them all back together.

I’m eager to see how Bryan manages the next one, since he is spending 30 days solely in the Linux shell. How will Google Hangouts (which is what they use to make the show) work?

Curious minds want to know.

by Tarus at June 17, 2016 03:24 PM

June 09, 2016

Adventures in Open Source

Choose the Right Thermometer

Okay, so I have a love/hate relationship with CenturyLink. CenturyLink provides a DSL circuit to my house. I love the fact that I have something resembling broadband with 10Mbps down and about 1Mbps up. Now that doesn’t even qualify as broadband according to the FCC, but it beats the heck out of the alternatives (and I am jealous of my friends with cable who have 100Mbps down or even 300Mbps).

The hate part comes from reliability, which lately has been crap. This post is actually focused on OpenNMS so I won’t go into all of my issues, but I’ve been struggling with long outages in my service.

The latest issue is a new one: packet loss. Usually the circuit is either up or completely down, but for the last three days I’ve been having issues with a large percentage of dropped packets. Of course I monitor my home network from the office OpenNMS instance, and this will usually manifest itself with multiple nodeLostService events around HTTP since I have a personal web server that I monitor.

The default ICMP monitor does not measure packet loss. As long as at least one ping reply makes it, ICMP is considered up, so the node itself remains up. OpenNMS does have a monitor for packet loss called Strafeping. It sends out 20 pings in a short amount of time and then measures how long they take to come back. So I added it to the node for my home and I saw something unusual: a consistent 19 out of 20 lost packets.
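For comparison, a rough command-line approximation of what Strafeping measures looks like this (a plain ping burst against a placeholder address; the summary line reports the loss percentage and the spread in round-trip times):

# 20 pings, 200ms apart – check the "packet loss" and rtt min/avg/max/mdev summary
ping -c 20 -i 0.2 192.168.0.1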

Strafeping Graph

Power cycling the DSL modem seems to correct the problem, and the command line ping was reporting no lost packets, so why was I seeing such packet loss from the monitor? Was Strafeping broken?

While it is always a possibility, I didn’t think that Strafeping was broken, but I did check a number of graphs for other circuits and they looked fine. Thus it had to be something else.

This brings up a touchy subject for me: false positives. Is OpenNMS reporting false problems?

It reminds me of an event that happened when I was studying physics back in the late 1980s. I was working with some newly discovered ceramic material that exhibited superconductivity at relatively high temperatures (around 92K). That temperature can be reached using liquid nitrogen, which was relatively easy to source compared to colder liquids like liquid helium.

I needed to measure the temperature of the ceramic, but mercury (used in most common thermometers) is a solid at those temperatures, so I went to my advisor for suggestions. His first question to me was “What does a thermometer measure?”

I thought it was a trick question, so I answered “temperature” (“thermo” meaning temperature and meter meaning “to measure”). He replied, “Okay, smart guy, the temperature of what?”

That was harder to answer exactly, so I said vague things like the ambient environment, whatever it was next to, etc. He interrupted me and said “No, a thermometer measures one thing: the temperature of the thermometer”.

This was an important lesson, even though it seems obvious. In the case of the ceramic it meant a lot of extra steps to make sure the thermometer we were using (which was based on changes in resistance) was as close to the temperature of the material as possible.

What does that have to do with OpenNMS? Well, OpenNMS is like that thermometer. It is up to us to make sure that the way we decide to use it for monitoring is as close to our criteria as possible. A “false positive” usually indicates a problem with the method versus the tool – OpenNMS is behaving exactly as it should but we need to match it better to what we expect.

In my case I found out the router I use was limited by default to responding to one ping per second (to avoid DDoS attacks, I assume), so last night when I upped that to allow 20 pings per second, Strafeping started to work as expected (as you can see in the graph above).

This allowed me to detect when my DSL circuit packet loss started again today. A little after 14:00 the system detected high packet loss. When this happened before, power cycling the modem seemed to fix it, so I headed home to do just that.

While I was on the way, around 15:30, the packet loss seemed to improve, but as you can see from the graph the ping times were all over the place (the line is green but there is a lot of extra “smoke” around it, indicating variance in the response times). I proactively power cycled the modem and things settled down. The CenturyLink agent agreed to send me a new modem.

The point of this post is to stress that you need to understand how your monitoring tools actually work, and that you can often correct issues that make a monitor unusable and turn it into something useful. Choose the right thermometer.

by Tarus at June 09, 2016 10:59 PM

June 03, 2016

Adventures in Open Source

Nextcloud, Never Stop Nexting!

It’s been a while since I’ve posted a long, navel-gazing rant about the business of open source software. I’ve been trying to focus more on our business than spending time talking about it, but yesterday an announcement was made that brought all of it back to the fore.

TL;DR: Yesterday the Nextcloud project was announced as a fork of the popular ownCloud project. It was founded by many of the core developers of ownCloud. On the same day, the US corporation behind ownCloud shut its doors, citing Nextcloud as the reason. Is this a good thing? Only time will tell, but it represents the (still) ongoing friction between open source software and traditional software business models.

I was looking over my Google+ stream yesterday when I saw a post by Bryan Lunduke announcing a special “secret” broadcast coming at 1pm (10am Pacific). As I am a Lundookie, I made a point to watch it. I missed the start of it but when I joined it turned out to be an interview with the technical team behind a new project called Nextcloud, which was for the most part the same team behind ownCloud.

Nextcloud is a fork, and in the open source world a “fork” is the nuclear option. When a project’s community becomes so divided that they can’t work things out, or they don’t want to work things out for whatever reasons, there is the option to take the code and start a new project. It always represents a failure, but sometimes it can’t be helped. The two forks I can think of offhand, Joomla from Mambo and Icinga from Nagios, both resulted in stronger projects and better software, so maybe that will happen here.

In part I blame the fork on the VC model for financing software companies. In the traditional software model, a bunch of money is poured into a company to create software, but once that software is created the cost of reproducing it is near zero, so the business model is to sell licenses to the software to the end users in order to generate revenue in the future. This model breaks when it comes to free and open source software, since once the software is created there is no way to force the end users to pay for it.

That still doesn’t keep companies from trying. This resulted in a trend (which is dying out) called “open core” – the idea that some software is available under an open source license but certain features are kept proprietary. As Brian Prentice at Gartner pointed out, there is little difference between this and just plain old proprietary software. You end up with the same lack of freedom and the same vendor lock-in.

Those of us who support free software tend to be bothered by this. Few things get me angrier than to be at a conference and have someone go “Oh, this OpenNMS looks nice – how much is the enterprise version?”. We only have the enterprise version and every bit of code we produce is available under an open source license.

Perhaps this happened at ownCloud. When one of the founders was on Bad Voltage a while back, I had this to say about the interview:

The only thing that wasn’t clear to me was the business model. The founder Frank Karlitschek states that ownCloud is not “open core” (or as we like to call it “fauxpensource”) but I’m not clear on their “enterprise” vs. “community” features. My gut tells me that they are on the side of good.

Frank seemed really to be on the side of freedom, and I could see this being a problem if the rest of the ownCloud team wasn’t so dedicated.

On the interview yesterday I asked if Nextcloud was going to have a proprietary (or “enterprise”) version. As you can imagine I am pretty strongly against that.

The reason I asked came from this article on the new company, which stated:

There will be two editions of Nextcloud: the free of cost community edition and the paid enterprise edition. The enterprise edition will have some additional features suited for enterprise customers, but unlike ownCloud, the community and enterprise editions for Nextcloud will borrow features from each other more freely.

Frank wouldn’t commit to making all of Nextcloud open, but he does seem genuinely determined to make as much of it open as possible.

Which leads me to wonder, what’s stopping him?

It’s got to be the money guys, right? Look, nothing says that open source companies can’t make money, it’s just that you have to do it differently than you would with proprietary software. I can’t stress this enough – if your “open source” business model involves selling proprietary software, you are not an open source company.

This is one of the reasons my blood pressure goes up whenever I visit Silicon Valley. Seriously, when I watch the HBO show to me it isn’t a comedy, it’s a documentary (and the fact that I most closely identify with the character of Erlich doesn’t make me feel all that better about myself).

I want to make things. I want to make things that last. I can remember the first true vacation I took, several years after taking over the OpenNMS project, when it had grown to the point that it didn’t need me all the time. I was so happy that it had reached that point. I want OpenNMS to be around well after I’m gone.

It seems, however, that Silicon Valley is more interested in making money rather than making things. They hunt “unicorns” – startups with more than a $1 billion valuation – and frequently no one can really determine how they arrive at that valuation. They are so consumed with jargon that quite often you can’t even figure out what some of these companies do, and many of them fade in value after the IPO.

I can remember a keynote at OSCON by Mårten Mickos about Eucalyptus, and how it was “open source” but of course would have proprietary code because “well, we need to make money”. He is one of those Silicon Valley darlings who just doesn’t get open source, and it’s why we now have OpenStack.

The biggest challenge to making money in open source is educating the consumer that free software doesn’t mean free solution. Free software can be very powerful but it comes with a certain level of complexity, and to get the most out of it you have to invest in it. The companies focused on free and open source software make money by providing products that address this complexity.

Traditionally, this has been service and support. I like to say that at OpenNMS we don’t sell software, we sell time. Since we do little marketing, all of our users are self-selecting (which makes them incredibly intelligent and usually quite physically beautiful) and most of them have the ability to figure out their own issues. But by working with us we can greatly shorten the time to deploy as well as make them aware of options they may not know exist.

In more recent times, there is also the option to offer open source software as a service. Take WordPress, one of my favorite examples. While I find it incredibly easy to install an instance of WordPress, if you don’t want to or if you find it difficult, you can always pay them to host it for you. Change your mind later? You can export it to an instance you control.

The market is always changing and with it there is opportunity. As OpenNMS is a network monitoring platform and the network keeps getting larger, we are focusing on moving it to OpenStack for ultimate scalability, and then, coupled with our Minions, we’ll have the ability to handle an “Internet of Things” scale of devices. At each point there are revenue opportunities as we can help our clients get it set up in their private cloud, or help them by letting them outsource some or all of it, such as Newts storage. The beauty is that the end user gets to own their solution and they always have the option of bringing it back in house.

None of these models involves requiring a license purchase as part of the business plan. In fact, I can foresee a time in the near future where purchasing a proprietary software product without fully exploring open source alternatives will be considered a breach of fiduciary responsibility.

And these consumers will be savvy enough to demand pure open source solutions. That is why I think Nextcloud, if they are able to focus their revenue efforts on things such as an appliance, has a better chance of success than a company like ownCloud that relies on revenue from software licensing sales. The fact that most of the creators have left doesn’t help them, either.

The lack of revenue from license sales makes most VCs panic, and it looks like that’s exactly what happened with the US division of ownCloud:

Unfortunately, the announcement has consequences for ownCloud, Inc. based in Lexington, MA. Our main lenders in the US have cancelled our credit. Following American law, we are forced to close the doors of ownCloud, Inc. with immediate effect and terminate the contracts of 8 employees. The ownCloud GmbH is not directly affected by this and the growth of the ownCloud Foundation will remain a key priority.

I look forward to the time in the not too distant future when the open core model is seen as quaint as selling software on floppy disks at the local electronics store, and I eagerly await the first release of Nextcloud.

by Tarus at June 03, 2016 07:19 PM

June 01, 2016

OpenNMS Foundation Europe

Call for Papers – OUCE 2016

The 2016 edition of the OpenNMS User Conference Europe (OUCE) will take place at the University of Applied Sciences in Fulda.

Where do I find more information?

You can find more detailed information on our conference page at https://ouce.opennms.eu

When?

Tuesday, September 13th, 2016 until Thursday, September 15th, 2016

Where?

Hochschule Fulda
Leipziger Straße 123
36037 Fulda, Germany

OUCE 2016 - Hochschule Fulda

Tickets?

Tickets are available through the online event registration system from XING Events.

Submit a Paper or Workshop

In order to submit your proposal, please go to the registration page, register with your email address and send us your proposal. The personal information is only used for the purpose of organizing the conference.

If you are curious about what others have already submitted, have a look at the preliminary schedule.

Your contribution matters

The conference is mainly driven by contributions from users who want to share their experience with open source monitoring solutions, especially OpenNMS. Your proposal can have any form you like, e.g.:

Regular talks
Share your experience in a user story and tell others about your way to solve a specific monitoring issue.
Workshops
Teach others about a technology, discuss a topic, ask for feedback or develop new ways to solve monitoring problems.
Community events
Bring some bottles of beer or your favorite drink and help bring the community together.

The focus of the conference covers the following topics:

Technology
Network management is hard. It has scalability requirements and needs to integrate with a lot of other tools. If you have built a business integration or extended OpenNMS, this is the track to show others what you have achieved and how OpenNMS works at its best.
Business
This track focuses on talks for people who want to share their experience using Open Source monitoring like OpenNMS in commercial environments. You can show how Open Source affects return on investment and total cost of ownership in your business, or how you built your business in cooperation with Open Source monitoring solutions.
Projects
Everything has a start and nothing is perfect. The great thing about a community is sharing knowledge and getting input from others on real-world use cases. If you’re building a cool solution with or around OpenNMS or with other tools and would like to share it, this is the place to do so. You can inspire other people to follow your path and encourage developers to think about further improvements. You will also have the chance to meet other people who can help improve your solution. Even if you have installed bleeding-edge stuff or gained some experience in the wild network management world, please submit your talk or workshop here.

We are very flexible on the format of your submission but we recommend the following formats:

  • A regular talk should have a duration of about 45 minutes.
  • A lightning talk should not exceed 10 minutes.
  • Workshops should last for a maximum of 90 minutes. If you have special requirements, please let us know.
  • Other events can happen in workshop rooms or in the lounge. These events should be coordinated with the board upfront.

If you have questions regarding the conference, please register on the OUCE mailing list, where you can get in contact with the organization team.

Recording

Talks will be recorded and made available on our website after the conference (unless you tell us that you’d prefer not to have them published). All recordings will be available without any special logins and can be found on our OpenNMS YouTube channel.

Who is attending?

The conference will be attended by OpenNMS users, prospective users and folks interested in open source from all over the world. You can meet OpenNMS developers as well as other experts in the network management domain.

The 2016 conference is limited to a maximum of 70 participants. Attendee profiles range from system administrators to programmers to IT managers.

Call for papers procedure

The call for papers is open until August 1st, 2016. Submitted talks will be evaluated by the conference board.

After the review period ends on August 1st, you will be notified whether or not your proposal has been accepted. Please take note of our submission policies as well:

  • your talk will be contributed under the following license: CC BY-SA or another permissive Creative Commons license
  • your talk will be harassment-free for everyone

by Ronny Trommer at June 01, 2016 02:58 PM

May 26, 2016

Adventures in Open Source

Emley Moor, Kirklees, West Yorkshire

I spent last week back in the United Kingdom. I always find it odd to travel to the UK. When I’m in, say, Germany or Spain, I know I’m in a different country. With the UK I sometimes forget and hijinks ensue. As Shaw may have once said, we are two countries separated by a common language.

Usually I spend time in the South, mainly Hampshire, but this trip was in Yorkshire, specifically West Yorkshire. I was looking forward to this for a number of reasons. For example, I love Yorkshire Pudding, and the Four Yorkshiremen is my favorite Monty Python routine.

Also, it meant that I could fly into Manchester Airport and miss Heathrow. Well, I didn’t exactly miss it.

I was visiting a big client that most people have never heard of, even though they are probably an integral part of your life if you live in the UK. Arqiva provides the broadcast infrastructure for much of the television and mobile phone industry in the country, as well as being involved in deploying networks for projects such as smart metering and the Internet of Things.

We were working at the Emley Moor location, which is home to the Emley Moor Mast. This is the tallest freestanding structure in Britain (and the third tallest in the European Union). With a total height of 1084 feet, it is higher than the Eiffel Tower and almost twice as high as the Washington Monument.

Emley Moor Mast View

The mast was built in 1971 to replace a metal lattice tower that fell, due to a combination of ice and wind, in 1969. I love the excerpt from the log book mentioned in the Wikipedia article:

  • Day: Lee, Caffell, Vander Byl
  • Ice hazard – Packed ice beginning to fall from mast & stays. Roads close to station temporarily closed by Councils. Please notify councils when roads are safe (!)
  • Pye monitor – no frame lock – V10 replaced (low ins). Monitor overheating due to fan choked up with dust- cleaned out, motor lubricated and fan blades reset.
  • Evening :- Glendenning, Bottom, Redgrove
  • 1,265 ft (386 m) Mast :- Fell down across Jagger Lane (corner of Common Lane) at 17:01:45. Police, I.T.A. HQ, R.O., etc., all notified.
  • Mast Power Isolator :- Fuses removed & isolator locked in the “OFF” position. All isolators in basement feeding mast stump also switched off. Dehydrators & TXs switched off.

They still have that log book, open to that page.

Emley Moor Log Book

If you have 20 minutes, there is a great old documentary on the fall of the old tower and the construction of the new mast.

On my last day there we got to go up into the structure. It’s pretty impressive:

Emley Moor Mast Up Close

and the inside looks like something from a 1970s sci-fi movie:

Emley Moor Mast Inside

The article stated that it takes seven minutes to ride the lift to the top. I timed it at six minutes, fifty-seven seconds, so that’s about right (it’s fifteen seconds quicker going down). I was working with Dr. Craig Gallen who remembers going up in the open lift carriage, but we were in an enclosed car. It’s very small and with five of us in it I will admit to a small amount of claustrophobia on the way up.

But getting to the top is worth it. The view is amazing:

View from Emley Moor Mast

It was a calm day but you could still feel the tower sway a bit. They have a plumb bob set up to measure the drift, and it was barely moving while we were up there. Toby, our host, told of a time he had to spend seven hours installing equipment when the bob was moving four to five inches side to side. They had to move around on their hands and knees to avoid falling over.

Plumb Bob

I’m glad I wasn’t there on that day, but our day was fantastic. Here is a shot of the parking lot where the first picture (above) was taken.

View of Emley Moor Parking Lot

I had a really great time on this trip. The client was amazing, and I really like the area. It reminds me a bit of the North Carolina mountains. I did get my Yorkshire Pudding in Yorkshire (bucket list item):

Yorkshire Pudding in Yorkshire

and one evening Craig and I got to meet up with Keith Spragg.

Keith Spragg and Craig Gallen

Keith is a regular on the OpenNMS IRC channel (#opennms on freenode.net), and he works for Southway Housing Trust. They are a non-profit that manages several thousand homes, and part of that involves providing certain IT services to their tenants. They are mainly a Windows/Citrix shop but OpenNMS is running on one of the two Linux machines in their environment. He tried out a number of solutions before finding that OpenNMS met his needs, and he pays it forward by helping people via IRC. It always warms my heart to see OpenNMS being used in such places.

I hope to return to the area, although I was glad I was there in May. It’s around 53 degrees north latitude, which puts it level with the southern Alaskan islands. It would get light around 4am, and in the winter ice has been known to fall in sheets from the Mast (the walkways are covered to help protect the people who work there).

I bet Yorkshire Pudding really hits the spot on a cold winter’s day.

by Tarus at May 26, 2016 10:21 PM

May 16, 2016

OpenNMS Foundation Europe

[Release] – OpenNMS Horizon 18.0.0

We welcome our new release of OpenNMS Horizon 18.0.0 with code name Tardigrade, named after perhaps one of the most durable known organisms. This release introduces a few really cool new features.

Business Service Monitoring

To put your monitored assets into a Business Service context, you can use Business Service Monitoring (BSM). The goal of the Business Service Monitor is to provide a high-level correlation of business rules, and to display them in the topology map. In the BSM, you define a Business Service that encompasses a set of entities (nodes, interfaces, services, and other Business Services) and the relationships among them. The alarm states of the component entities are rolled up in a map-reduce fashion to drive a state machine. The state of a Business Service is reflected in a new type of event, and can also be visualized (along with the service hierarchy) in the existing OpenNMS topology map. A Business Service can create its own alarms, which can be used in workflows. The graph representation allows you to analyze the root cause from the Business Service down to the technical service, and the other way around – which Business Service is impacted by a specific technical service outage.

For details on using the BSM, see the User and Administrators Guide.

ElasticSearch 1.x Event Forwarder

We have added the possibility to forward OpenNMS events and alarms into ElasticSearch for analyzing and plotting with other tools like Kibana or Grafana. For more details see the Admin Guide. A big thank you to our community contributor Umberto Nicoletti, who started this work.

OpenNMS Properties are modular

Most properties set in the opennms.properties file can now instead be overridden by creating a file in the ${OPENNMS_HOME}/etc/opennms.properties.d directory. This makes maintenance friendlier and makes it easier to use configuration management tools like SaltStack, Ansible, Chef or Puppet.

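For example, a minimal sketch (the file name is arbitrary, and the property shown is just one illustration of a setting you might override this way):

# ${OPENNMS_HOME}/etc/opennms.properties.d/timeseries.properties
# Overrides a single property without editing opennms.properties itself.
org.opennms.timeseries.strategy=newts
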
Notification for Slack and Mattermost

With the new notification strategies it is now possible to send monitoring notifications to Slack and Mattermost.

OpenNMS Plugin Manager

Horizon 18 includes the core of an API and tooling for adding 3rd-party “plugins” to OpenNMS. It provides a set of tools for finding and adding plugins to be loaded into the OpenNMS OSGi container.

Requisition UI improvements

A huge number of improvements have gone into the requisition UI. Also, the old “Quick-Add Node” functionality has been reimplemented using the same backend as the requisition UI.

“Scan Report” Remote Poller GUI

A new front-end for the remote poller that lets you perform a single scan and get a pass/fail report in a GUI has been added. You can enable this alternate UI with the “-s” option on the remote poller CLI.

Topology UI Updates

As part of the BSM work, the topology UI has been vastly improved for both performance and usability.

TSRM Ticketing Plugin

We have added a new Ticketing Plugin for IBM Tivoli Service Request Manager (TSRM).
Information on configuring the TSRM ticketing plugin can be found in the Administrators Guide.

Collect anonymous usage statistics

To get a better idea of how to estimate hardware requirements and the performance characteristics of OpenNMS, we wrote a tool to occasionally submit anonymous diagnostic information about your OpenNMS install. It will submit information like the number of nodes, alarms, etc. as well as some basic system information to our servers for statistical reasons.

When a user with the Admin role logs into the system for the first time, they will be prompted as to whether or not they want to opt in to publishing these statistics. Statistics will only be published once an Administrator has opted in.
These statistics are visualized on stats.opennms.org.

OpenNMS Data Source is Grafana 3 compliant

We have uploaded the OpenNMS Data Source to the Grafana plugin platform, which now allows easy setup and installation with a simple sudo grafana-cli plugins install opennms-datasource.

Vagrant Boxes Refreshed

We have updated the Vagrant boxes which are uploaded to the Atlas platform. They are updated with the latest Horizon 18 and a pre-configured Grafana 3, for use as a test, dev or play environment.

Besides that, a lot of bugs have been fixed, and you can find all of the details in our Release Notes.

Happy updating, and thank you to all contributors who made this great release happen.

by Ronny Trommer at May 16, 2016 05:22 PM

May 11, 2016

Adventures in Open Source

OpenNMS Horizon 18 “Tardigrade” Is Now Available

I am extremely happy to announce the availability of Horizon 18, codenamed “Tardigrade”. Ben is responsible for naming our releases and he’s decided that the theme for Horizon 18 will be animals. The name “Tardigrade” was suggested in the IRC channel by Uberpenguin, and while they aren’t the prettiest things, Wikipedia describes them as “perhaps the most durable of known organisms” so in the context of OpenNMS that is appropriate.

OpenNMS Horizon 18

I am also happy to see the Horizon program working. When we split OpenNMS into Horizon and Meridian, the main reason was to drive faster development. Now instead of a new stable release every 18 months, we are getting them out every 3 to 4 months. And these are great releases – not just major releases in name only.

The first thing you’ll notice if you log in to Horizon 18 as a user in the admin role is that we’ve added a new “opt-in” feature that lets us know a little bit about how OpenNMS is being used by people. We hope that most of you will choose to send us this information, and in the spirit of the Open Source Way we’ve made all of the statistics available publicly.

OpenNMS Opt-In Screen

One of the key things we are looking for is the list of SNMP Object IDs. This will let us know what devices are being monitored by our users so that we can increase the level of support for them. Of course, this requires that your OpenNMS instance be able to reach the stats server on the Internet, and you can change your choice at any time on the Configuration admin page under “Data Choices”. It will only send this information once every 24 hours, so we don’t expect it to impact network traffic at all.

Once you’ve opted in, the next thing you’ll probably notice is new problem lists on the home page listing “services” and “applications”.

OpenNMS BSM Problem Lists

This relates to the major feature addition in Horizon 18 of the Business Service Monitor (BSM).

OpenNMS BSM OpenDaylight

As people move from treating servers as pets to treating them like cattle, the emphasis has shifted to understanding how well applications and microservices are running as a whole instead of focusing on individual devices. The BSM allows you to configure these services and then leverage all the usual OpenNMS crunchy goodness as you would a legacy service like HTTP running on a particular box. The above screenshot comes from some prototype work Jesse has been doing with integrating OpenNMS with OpenDaylight. As you can see at a glance, while the ICMP service is down on a particular device, the overall Network Fabric is still functioning perfectly.

Another thing I’m extremely proud of is the increase in the quality of documentation. Ronny and the rest of the documentation team are doing a great job, and we’ve made it a requirement that new features aren’t complete without documentation. Please check out the release notes as an example. They contain a pretty comprehensive list of changes in 18.

A few I’d like to point out:

Horizon 17 is one of the most powerful and stable releases of OpenNMS ever, and we hope to continue that tradition with Horizon 18. Hats off to the team for such great work.

Here is a list of all the issues addressed in Horizon 18:

Release Notes – OpenNMS – Version 18.0.0

Bug

  • [NMS-3489] – "ADD NODE" produces "too much" config
  • [NMS-4845] – RrdUtils.createRRD log message is unclear
  • [NMS-5788] – model-importer.properties should be deprecated and removed
  • [NMS-5839] – Bring WaterfallExecutor logging on par with RunnableConsumerThreadPool
  • [NMS-5915] – The retry handler used with HttpClient is not going to do what we expect
  • [NMS-5970] – No HTML title on Topology Map
  • [NMS-6344] – provision.pl does not import requisitions with spaces in the name
  • [NMS-6549] – Eventd does not honor reloadDaemonConfig event
  • [NMS-6623] – Update JNA.jar library to support ARM based systems
  • [NMS-7263] – jaxb.properties not included in jar
  • [NMS-7471] – SNMP Plugin tests regularly failing
  • [NMS-7525] – ArrayOutOfBounds Exception in Topology Map when selecting bridge-port
  • [NMS-7582] – non RFC conform behaviour of SmtpMonitor
  • [NMS-7731] – Remote poller dies when trying to use the PageSequenceMonitor
  • [NMS-7763] – Bridge Data is not Collected on Cisco Nexus
  • [NMS-7792] – NPE in JmxRrdMigratorOffline
  • [NMS-7846] – Slow LinkdTopologyProvider/EnhancedLinkdTopologyProvider in bigger enviroments
  • [NMS-7871] – Enlinkd bridge discovery creates erroneous entries in the Bridge Forwarding Tables of unrelated switches when host is a kvm virtual host
  • [NMS-7872] – 303 See Other on requisitions response breaks the usage of the Requisitions ReST API
  • [NMS-7880] – Integration tests in org.opennms.core.test-api.karaf have incomplete dependencies
  • [NMS-7918] – Slow BridgeBridgeTopologie discovery with enlinkd.
  • [NMS-7922] – Null pointer exceptions with whitespace in requisition name
  • [NMS-7959] – Bouncycastle JARs break large-key crypto operations
  • [NMS-7967] – XML namespace locations are not set correctly for namespaces cm, and ext
  • [NMS-7975] – Rest API v2 returns http-404 (not found) for http-204 (no content) cases
  • [NMS-8003] – Topology-UI shows LLDP links not correct
  • [NMS-8018] – Vacuumd sends automation events before transaction is closed
  • [NMS-8056] – opennms-setup.karaf shouldn't try to start ActiveMQ
  • [NMS-8057] – Add the org.opennms.features.activemq.broker .xml and .cfg files to the Minion repo webapp
  • [NMS-8058] – Poll all interface w/o critical service is incorrect
  • [NMS-8072] – NullPointerException for NodeDiscoveryBridge
  • [NMS-8079] – The OnmsDaoContainer does not update its cache correctly, leading to a NumberFormatException
  • [NMS-8080] – VLAN name is not displayed
  • [NMS-8086] – Provisioning Requisitions with spaces in their name.
  • [NMS-8096] – JMX detector connection errors use wrong log level
  • [NMS-8098] – PageSequenceMonitor sometimes gives poor failure reasons
  • [NMS-8104] – init script checkXmlFiles() fails to pick up errors
  • [NMS-8116] – Heat map Alarms/Categories do not show all categories
  • [NMS-8118] – CXF returning 204 on NULL responses, rather than 404
  • [NMS-8125] – Memory leak when using Groovy + BSF
  • [NMS-8128] – NPE if provisioning requisition name has spaces
  • [NMS-8137] – OpenNMS incorrectly discovers VLANs
  • [NMS-8146] – "Show interfaces" link forgets the filters in some circumstances
  • [NMS-8167] – Cannot search by MAC address
  • [NMS-8168] – Vaadin Applications do not show OpenNMS favicon
  • [NMS-8189] – Wrong interface status color on node detail page
  • [NMS-8194] – Return an HTTP 303 for PUT/POST request on a ReST API is a bad practice
  • [NMS-8198] – Provisioning UI indication for changed nodes is too bright
  • [NMS-8208] – Upgrade maven-bundle-plugin to v3.0.1
  • [NMS-8214] – AlarmdIT.testPersistManyAlarmsAtOnce() test ordering issue?
  • [NMS-8215] – Chart servlet reloads Notifd config instead of Charts config
  • [NMS-8216] – Discovery config screen problems in latest code
  • [NMS-8221] – Operation "Refresh Now" and "Automatic Refresh" referesh the UI differently
  • [NMS-8224] – JasperReports measurements data-source step returning null
  • [NMS-8235] – Jaspersoft Studio cannot be used anymore to debug/create new reports
  • [NMS-8240] – Requisition synchronization is failing due to space in requisition name
  • [NMS-8248] – Many Rcsript (RScript) files in OPENNMS_DATA/tmp
  • [NMS-8257] – Test flapping: ForeignSourceRestServiceIT.testForeignSources()
  • [NMS-8272] – snmp4j does not process agent responses
  • [NMS-8273] – %post error when Minion host.key already exists
  • [NMS-8274] – All the defined Statsd's reports are being executed even if they are disabled.
  • [NMS-8277] – %post failure in opennms-minion-features-core: sed not found
  • [NMS-8293] – Config Tester Tool doesn't check some of the core configuration files
  • [NMS-8298] – Label of Vertex is too short in some cases
  • [NMS-8299] – Topology UI recenters even if Manual Layout is selected
  • [NMS-8300] – Center on Selection no longer works in STUI
  • [NMS-8301] – v2 Rest Services are deployed twice to the WEB-INF/lib directory
  • [NMS-8302] – Json deserialization throws "unknown property" exception due to usage of wrong Jax-rs Provider
  • [NMS-8304] – An error on threshd-configuration.xml breaks Collectd when reloading thresholds configuration
  • [NMS-8313] – Pan moving in Topology UI automatically recenters
  • [NMS-8314] – Weird zoom behavior in Topology UI using mouse wheel
  • [NMS-8320] – Ping is available for HTTP services
  • [NMS-8324] – Friendly name of an IP service is never shown in BSM
  • [NMS-8330] – Switching Topology Providers causes Exception
  • [NMS-8335] – Focal points are no longer persisted
  • [NMS-8337] – Non-existing resources or attributes break JasperReports when using the Measurements API
  • [NMS-8353] – Plugin Manager fails to load
  • [NMS-8361] – Incorrect documentation for org.opennms.newts.query.heartbeat
  • [NMS-8371] – The contents of the info panel should refresh when the vertices and edges are refreshed
  • [NMS-8373] – The placeholder {diffTime} is not supported by Backshift.
  • [NMS-8374] – The logic to find event definitions confuses the Event Translator when translating SNMP Traps
  • [NMS-8375] – License / copyright situation in release notes introduction needs simplifying
  • [NMS-8379] – Sluggish performance with Cassandra driver
  • [NMS-8383] – jmxconfiggenerator feature has unnecessary includes
  • [NMS-8386] – Requisitioning UI fails to load in modern browsers if used behind a proxy
  • [NMS-8388] – Document resources ReST service
  • [NMS-8389] – Heatmap is not showing
  • [NMS-8394] – NoSuchElement exception when loading the TopologyUI
  • [NMS-8395] – Logging improvements to Notifd
  • [NMS-8401] – There are errors on the graph definitions for OpenNMS JMX statistics
  • [NMS-8403] – Document styles of identifying nodes in resource IDs

Enhancement

  • [NMS-2504] – Create a better landing page for Configure Discovery aftermath
  • [NMS-4229] – Detect tables with Provisiond SNMP detector
  • [NMS-5077] – Allow other services to work with Path Outages other than ICMP
  • [NMS-5905] – Add ifAlias to bridge Link Interface Info
  • [NMS-5979] – Make the Provisioning Requisitions "Node Quick-Add" look pretty
  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7446] – Enhance Bridge Link Object Model
  • [NMS-7447] – Update BridgeTopology to use the new Object Model
  • [NMS-7448] – Update Bridge Topology Discovery Strategy
  • [NMS-7756] – Change icon for Dell PowerConnector switch
  • [NMS-7798] – Add Sonicwall Firewall Events
  • [NMS-7903] – Elasticsearch event and alarm forwarder
  • [NMS-7950] – Create an overview for the developers guide
  • [NMS-7965] – Add support for setting system properties via user supplied .properties files
  • [NMS-7976] – Merge OSGi Plugin Manager into Admin UI
  • [NMS-7980] – provide HTTPS Quicklaunch into node page
  • [NMS-8015] – Remove Dependencies on RXTX
  • [NMS-8041] – Refactor Enhanced Linkd Topology
  • [NMS-8044] – Provide link for Microsoft RDP connections
  • [NMS-8063] – Update asciidoc dependencies to latest 1.5.3
  • [NMS-8076] – Allow user to access local documentation from OpenNMS Jetty Webapp
  • [NMS-8077] – Add NetGear Prosafe Smart switch SNMP trap events and syslog events
  • [NMS-8092] – Add OpenWrt syslog and related event definitions
  • [NMS-8129] – Disallow restricted characters from foreign source and foreign ID
  • [NMS-8149] – Update asciidoctorj to 1.5.4 and asciidoctorjPdf to 1.5.0-alpha.11
  • [NMS-8152] – Collect and publish anonymous statistics to stats.opennms.org
  • [NMS-8160] – Remove Quick-Add node to avoid confusions and avoid breaking the ReST API
  • [NMS-8163] – Requisitions UI Enhancements
  • [NMS-8179] – ifIndex >= 2^31
  • [NMS-8182] – Add HTTPS as quick-link on the node page
  • [NMS-8205] – Generate events for alarm lifecycle changes
  • [NMS-8209] – Upgrade junit to v4.12
  • [NMS-8210] – Add support for calculating the derivative with a Measurements API Filter
  • [NMS-8211] – Add support for retrieving nodes with a filter expression via the ReST API
  • [NMS-8218] – External event source tweaks to admin guide
  • [NMS-8219] – Copyright bump on asciidoc docs
  • [NMS-8225] – Integrate the Minion container and packages into the mainline OpenNMS build
  • [NMS-8226] – Upgrade SNMP4J to version 2.4
  • [NMS-8238] – Topology providers should provide a description for display
  • [NMS-8251] – Parameterize product name in asciidoc docs
  • [NMS-8259] – Cleanup testdata in SnmpDetector tests
  • [NMS-8265] – SNMP collection systemDefs for Cisco ASA5525-X, ASA5515-X
  • [NMS-8266] – SNMP collection systemDefs for Juniper SRX210he2, SRX100h
  • [NMS-8267] – Create documentation for SNMP detector
  • [NMS-8271] – Enable correlation engines to register for all events
  • [NMS-8296] – Be able to re-order the policies on a requisition through the UI
  • [NMS-8334] – Implement org.opennms.timeseries.strategy=evaluate to facilitate the sizing process
  • [NMS-8336] – Set the required fields when not specified while adding events through ReST
  • [NMS-8349] – Update screenshots with 18 theme in user documentation
  • [NMS-8365] – Add metric counter for drop counts when the ring buffer is full
  • [NMS-8377] – Applying some organizational changes on the Requisitions UI (Grunt, JSHint, Dist)

Story

Task

  • [NMS-8236] – Move the "vaadin-extender-service" module to opennms code base

by Tarus at May 11, 2016 05:03 PM

April 26, 2016

OpenNMS Foundation Europe

DevJam 2016 – Travel Bursary

Do you want to be a part of the great OpenNMS DevJam 2016? The best way for someone to learn to develop and contribute to OpenNMS is to attend DevJam. If you want to improve your abilities and meet other OpenNMS developers from all over the world, then you need to be at OpenNMS DevJam 2016!

The OpenNMS DevJam is THE event of the year for contributors and developers of the OpenNMS Project. Contributors and developers of OpenNMS from all over the world meet at the University of Minnesota. The event is all about learning, hacking, coding, talking and having fun around the OpenNMS project.

When?
Sunday, July 24, 2016 through Saturday, July 30, 2016

Where?
University of Minnesota – Mark G. Yudof Hall
220 Delaware St. SE
Minneapolis, MN 55455

More details?
http://www.opennms.org/wiki/Dev-Jam_2016

Need to convince your manager? Mike Huot wrote a proposal which can help you get there.

Deadline for the proposal is 31st May 2016.

The OpenNMS Group, Inc. and the OpenNMS Foundation Europe e.V. support volunteers who have no financial or commercial backing with a travel bursary for the conference. What you create should be under a free license and publicly available. If you want to apply for the DevJam 2016 travel bursary, please fill in the following form:

Give a short title for your DevJam 2016 project.

Give us a description of how you would like to spend the week with OpenNMS contributors and developers. Do you want to start, learn or contribute? You can be creative in how you want to help the OpenNMS project. Your results should be publicly available and should be under a free license.

Submit your proposal to the board members of the OpenNMS Foundation Europe e.V.

Hope to see you soon at DevJam 2016 in the Twin Cities.

by Ronny Trommer at April 26, 2016 06:07 PM

April 22, 2016

Adventures in Open Source

Welcome Ecuador (Country 29)

It is with mixed emotions that I get to announce that we now have a customer in Ecuador, our 29th country.

My emotions are mixed as my excitement at having a new customer in a new country is offset by the tragedy that country suffered recently. Everyone at OpenNMS is sending out our best thoughts and we hope things settle down (quite literally) soon.

They join the following countries:

Australia, Canada, Chile, China, Costa Rica, Denmark, Egypt, Finland, France, Germany, Honduras, India, Ireland, Israel, Italy, Japan, Malta, Mexico, The Netherlands, Portugal, Singapore, Spain, Sweden, Switzerland, Trinidad, the UAE, the UK and the US.

by Tarus at April 22, 2016 03:49 PM

April 19, 2016

Adventures in Open Source

Agent Provocateur

I’ve been involved with the monitoring of computer networks for a long time, two decades actually, and I’m seeing an alarming trend. Every new monitoring application seems to be insisting on software agents. Basically, in order to get any value out of the application, you have to go out and install additional software on each server in your network.

Now there was a time when this was necessary. BMC Software made a lot of money with its PATROL series of agents, yet people hated them then as much as they hate agents now. Why? Well, first there was the cost, both in terms of licensing and in continuing to maintain them (upgrades, etc.). Next there was the fact that you had to add software to already overloaded systems. I can remember the first time the company I worked for back then deployed a PATROL agent on an Oracle database. When it was started up it took the database down as it slammed the system with requests. Which leads me to the final point: outside of the security issues that arise with an increase in the number of applications running on a system, the moment the system experiences a problem the blame will fall on the agent.

Despite that, agents still seem to proliferate. In part I think it is political. Downloading and installing agents looks like useful work. “Hey, I’m busy monitoring the network with these here agents”. Also in part, it is laziness. I have never met a programmer who liked working on someone else’s code, so why not come up with a proprietary protocol and write agents to implement it?

But what bothers me the most is that it is so unnecessary. The information you need for monitoring, with the possible exception of Windows, is already there. Modern operating systems (again, with the exception of Windows) ship with an SNMP agent, usually based on Net-SNMP. This is a secure, powerful, extensible agent that has been tried and tested for many years, and it is maintained directly on the server itself. You can use SNMPv3 for secure communications, and the “extend” and “pass” directives to make it easy to customize.

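As a minimal sketch (the script name and path here are placeholders I made up), the “extend” directive is a single line in snmpd.conf, and the result can be read back remotely with standard SNMP tools:

# /etc/snmp/snmpd.conf – expose the output of a local script via Net-SNMP
extend workshop-temp /usr/local/bin/read-temperature.sh

# From the management station, read it back through NET-SNMP-EXTEND-MIB:
snmpwalk -v2c -c public server.example.org NET-SNMP-EXTEND-MIB::nsExtendOutput1Line
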
Heck, even Windows ships with an extensible SNMP agent, and you can also access data via WMI and PowerShell.

But what about applications? Don’t you need an agent for that?

Not really. Modern applications tend to have an API, usually based on ReST, that can be queried by a management station for important information. Java applications support JMX, databases support ODBC, and when all that fails you can usually use good ol’ HTTP to query the application directly. And the best part is that the application itself can be written to guard against a monitoring query causing undue load on the system.

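A quick illustration (the host name and endpoint are entirely made up): checking an application’s own health endpoint needs nothing more than HTTP from the management station.

# Hypothetical application health check – no agent on the server required
curl -s -o /dev/null -w '%{http_code}\n' https://app.example.org/api/health
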
At OpenNMS we work with a lot of large customers, and they are loath to install new software on all of their servers. Plus, many of our customers have devices that can’t support additional agents, such as routers and switches, and IoT devices such as thermostats and door locks. This is the main reason why the OpenNMS monitoring platform is, by design, agentless.

A critic might point out that OpenNMS does have an agent in the remote poller, as well as in the upcoming Minion feature set. True, but those act as “user agents”, giving OpenNMS a view into networks as if it was a user of those networks. The software is not installed on every server but instead it just needs the same access as a user would have. So, it can be installed on an existing system or on a small system purchased for that purpose, at a minimum just one for each network to be monitored.

While some new IT fields may require agents, most successful solutions try to avoid them. Even in newer fields such as IT automation, the best solutions are agentless. They are not necessary, and I strongly suggest that anyone who is asked to install an agent for monitoring question that requirement.

by Tarus at April 19, 2016 03:31 PM

March 30, 2016

Adventures in Open Source

OpenNMS is Sweet Sixteen

It was sixteen years ago today that the first code for OpenNMS was published on Sourceforge. While the project was started in the summer of 1999, no one seems to remember the exact date, so we use March 30th to mark the birthday of the OpenNMS project.

OpenNMS Project Details

While I’ve been closely associated with OpenNMS for a very long time, I didn’t start it. It was started by Steve Giles, Luke Rindfuss and Brian Weaver. They were soon joined by Shane O’Donnell, and while none of them are associated with the project today, they are the reason it exists.

Their company was called Oculan, and I joined them in 2001. They built management appliances marketed as “purple boxes” based on OpenNMS and I was brought on to build a business around just the OpenNMS piece of the solution.

As far as I know, this is the only surviving picture of most of the original team, taken at the OpenNMS 1.0 Release party:

OpenNMS 1.0 Release Team

In 2002 Oculan decided to close source all future work on their product, thus ending their involvement with OpenNMS. I saw the potential, so I talked with Steve Giles and soon left the company to become the OpenNMS project maintainer. When it comes to writing code I am very poorly suited to the job, but my one true talent is getting great people to work with me, and judging by the quality of people involved in OpenNMS, it is almost a superpower.

I worked out of my house and helped maintain the community mainly through the #opennms IRC channel on freenode, and surprisingly the project managed not only to survive, but to grow. When I found out that Steve Giles was leaving Oculan, I applied to be their new CEO, which I’ve been told was the source of a lot of humor among the executives. The man they hired had a track record of snuffing out all potential from a number of startups, but he had the proper credentials that VCs seem to like so he got the job. I have to admit to a bit of schadenfreude when Oculan closed its doors in 2004.

But on a good note, if you look at the two guys in the above picture right next to the cake, Seth Leger and Ben Reed, they still work for OpenNMS today. We’re still here. In fact we have the greatest team I’ve ever worked with in my life, and the OpenNMS project has grown tremendously in the last 18 months. This July we’ll have our eleventh (!) annual developers conference, Dev-Jam, which will bring together people dedicated to OpenNMS, both old and new, for a week of hacking and camaraderie.

Our goal is nothing short of making OpenNMS the de facto management platform of choice for everyone, and while we still have a long way to go, we keep getting closer. My heartfelt thanks go out to everyone who made OpenNMS possible, and I look forward to writing many more of these notes in the future.

by Tarus at March 30, 2016 03:15 PM

March 14, 2016

Adventures in Open Source

OpenNMS Horizon 17.1.1 Released

Probably the last Horizon 17 version, 17.1.1, has been released. According to TWiO, the next release will be Horizon 18 at the end of the month, with Horizon 19 following at the end of May.

This release is mainly a maintenance release. It does contain one fix I used (NMS-8199), which allows for the state names in the Jira Trouble Ticketing plugin to be configured. This helps a lot if Jira is not in English.

If you are running Horizon 17, this should help it run a bit smoother.

Bug

  • [NMS-7936] – Chart Servlet Outages model exception
  • [NMS-8010] – Groups config rolled back after deleting a user in web UI
  • [NMS-8034] – Adding com.sun.management.jmxremote.authenticate=true on opennms.conf is ignored by the opennms script
  • [NMS-8048] – org.hibernate.exception.SQLGrammarException with ACLs on V17
  • [NMS-8075] – vacuumd-configuration.xml — Database error executing statement
  • [NMS-8113] – Overview about major releases in the release notes
  • [NMS-8153] – Can't modify the Foreign ID on the Requisitions UI when adding a new node
  • [NMS-8159] – When altering the SNMP Trap NBI config, the externally referenced mapping groups are persisted into the main file.
  • [NMS-8161] – Tooltips are not working on the new Requisitions UI
  • [NMS-8165] – OutageDao ACL support is broken causing web UI failures
  • [NMS-8177] – Install guide should use postgres admin for schema updates
  • [NMS-8199] – Allows state names to be configured in the JIRA Ticketer Plugin

Enhancement

  • [NMS-6404] – Allow send events through ReST
  • [NMS-8148] – Create pull request and contribution template to GitHub project

Task

  • [NMS-8151] – Remove all jersey artifacts from lib classpath

by Tarus at March 14, 2016 02:49 PM

March 10, 2016

OpenNMS Foundation Europe

[Release] – OpenNMS 17.1.1

We welcome our new release of OpenNMS Horizon 17.1.1 with code name Glenmorangie, named after a whisky distillery in Tain, Scotland. This is a bug fix release. The most noteworthy fix is for ACL support, which was broken and caused Web UI failures.

This release adds a small but useful enhancement. Instead of connecting to TCP port 5817 and using send-event.pl, it is now possible to send events to OpenNMS through ReST. In other news, GitHub introduced pull request templates a few weeks ago, which we have added to improve our pull request workflow. If you think it is more annoying than helpful, please don’t hesitate to give us feedback.

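As a hedged sketch of that ReST call (the UEI is made up, the credentials and port are the stock defaults, and depending on your version you may need to supply additional event fields – check the ReST documentation):

curl -u admin:admin -X POST -H 'Content-Type: application/xml' \
  -d '<event><uei>uei.opennms.org/example/customEvent</uei><source>curl</source></event>' \
  http://localhost:8980/opennms/rest/events
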
You can find all the details in our Release Notes. We wish you happy updating.

by Ronny Trommer at March 10, 2016 11:41 PM

March 04, 2016

Adventures in Open Source

Speeding Up OpenNMS Requisition Imports

One thing that differentiates OpenNMS from other applications is the strong focus on tools for provisioning the system. If you want to monitor hundreds of thousands of devices, and ultimately millions, the ordinary methods just don’t work.

Users of OpenNMS often create large requisitions from external database sources, and sometimes it can take a while for the import to complete. Delays can happen if the Foreign Source used for the requisition has a large number of service detectors that won’t exist on most devices.

For example, the default Foreign Source for Horizon 17 has about 15 detectors. Of those, only about 4 will exist on networking equipment (ICMP, SSH, HTTP and HTTPS). When scanning, this can add a lot of time per interface. Assuming 2 retries and a 3 second timeout, that would be 9 seconds for each non-existent service. With just 1000 interfaces, that’s 99000 seconds (9 seconds x 11 services x 1000 interfaces) of time just spent waiting, which translates to 27.5 hours.

Now, granted, the importer has multiple threads so the actual wait time will be less, but you can see how this can impact the time needed to import a requisition. This can be reduced significantly by tuning service detection to the bare minimum needed and perhaps adding other services later on a per-device basis without scanning.

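As an illustrative sketch of a trimmed-down foreign-source definition for networking gear (the requisition name is made up, and the detector class names and schema should be verified against your own installation before use):

<foreign-source name="Routers" xmlns="http://xmlns.opennms.org/xsd/config/foreign-source">
  <scan-interval>1d</scan-interval>
  <detectors>
    <detector name="ICMP" class="org.opennms.netmgt.provision.detector.icmp.IcmpDetector"/>
    <detector name="SSH" class="org.opennms.netmgt.provision.detector.ssh.SshDetector"/>
    <detector name="HTTP" class="org.opennms.netmgt.provision.detector.simple.HttpDetector"/>
    <detector name="HTTPS" class="org.opennms.netmgt.provision.detector.simple.HttpsDetector"/>
  </detectors>
  <policies/>
</foreign-source>
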
by Tarus at March 04, 2016 07:32 AM

February 25, 2016

Adventures in Open Source

Review: System 76 Wild Dog Pro

Recently, I was trying to work on the desktop at my office and it kept rebooting. There was nothing in the logs nor anything else to indicate a software issue, so I just assumed that my now 5-year-old machine was probably at its end of life.

Without hesitation I decided to order a new desktop from System 76. I really liked the Sables we bought from them last year, so I figured it would be simple to order a Linux-compatible machine from them.

I went to their “Desktops” page and without much thought decided on the “Wild Dog Pro“. I don’t have huge requirements, so the big monster with wheels, the “Silverback” (probably named after the gorilla), was right out. I picked more of a middle of the road machine with the following specs:

  • Case: Black brushed aluminum
  • CPU: 4.2 GHz i7-6700K (4.0 up to 4.2 GHz – 8MB Cache – 4 Cores – 8 Threads) + Liquid Cooling
  • Memory: 32 GB Dual-channel DDR4 at 2133 MHz (4× 8 GB)
  • Graphics: 2 GB GTX 960 Superclocked with 1024 CUDA Cores
  • Storage: 1 TB 2.5″ Solid State Drive
  • Dual Layer CD-RW / DVD-RW
  • WiFi up to 867 Mbps + Bluetooth
  • 3 Year Limited Parts and Labor Warranty

I also ordered two day shipping, since I thought I would need it fast.

I got an order confirmation almost immediately with an estimate of 2 to 6 days to ship. Soon after that I got a note stating that the Wild Dog was running toward the latter end of that range. I figured I could just use my laptop until the new machine arrived if necessary, and I waited.

While I was waiting, I still continued to use my old desktop. I noticed the rebooting issue happened toward the end of the day. It finally dawned on me (I’m a little thick) that it might be heat related. I crawled under the desk to find that the power supply fan wasn’t working. I ordered a new one of those to see if it would help.

Since the new power supply arrived before the Wild Dog shipped, and it fixed my issue, I contacted System 76 to see if I could change the shipping from “speedy” to something more like “camel”. They were happy to do it and refunded the difference in price.

Anyway, the new machine finally arrived (I ordered this on 29 January and got it on 16 February – a little slow but faster than Lenovo and Dell have been for me in the past). It showed up in a standard brown box:

Wild Dog Pro Box

The unit was minimally packaged inside (which I like):

Wild Dog Pro Box Open

Pretty case with a minimalist look:

Wild Dog Pro Front

with all of the “business” being on the back:

Wild Dog Pro Back

This has USB 2.0, USB 3.0 and USB 3.1 ports, including a USB-C connector should you be into such things.

I like the case, but they tape a letter on the top that, when removed, you can still see the marks left by the tape. I haven’t hit this with goo cleaner since it is going under my desk, but it did detract from the overall look of the unit. The letter contained a welcome note and some stickers, as well as a little cut-out dude called the “Desktop Sentinel” and named “M3lvin”. Not quite sure what that is, and a quick Google search turned up nothing.

Wild Dog Pro Letter

Of course, the first thing I did was open it up. The case is nice, although I’ve grown used to captive screws to remove side panels and was surprised when the two I took off came completely off. The system is well laid out inside with room for expansion (I wanted to put in a backup SATA drive to go with the SSD).

Wild Dog Pro Right

Wild Dog Pro Left

I can’t tell you much about the performance. It seems plenty fast, and I downloaded the test suites from Phoronix but just didn’t have the hours to run them for benchmarks. While it ships with Ubuntu 15.10, I’m a Linuxmint guy so I immediately went to install Mint on the machine.

This was harder than I thought it would be. I could not get the BIOS to boot off of the USB stick no matter what I tried (it saw the stick in the boot menu but wouldn’t boot to it). I ended up burning the image to DVD which, while slower, worked fine.

Then it dawned on me that they probably shipped with Ubuntu 15.10 because it has one of those fancy “Skylake” processors which benefits from later kernels. Luckily I had run into this with my Dell laptop, so I installed gcc-4.9 and the 4.4 kernel and everything worked but the wireless card. Turns out you need to install the latest ndiswrapper and you’ll be good to go.

Needless to say I’m eager for Mint 18 to come out with support for the later kernels.

Overall, I’m happy with my purchase. There is room for improvement on the speed of producing it and shipping it out, but my decision to use Mint was totally on me. I look forward to getting many hours of use out of this machine.

by Tarus at February 25, 2016 09:13 PM

February 24, 2016

Adventures in Open Source

OpenNMS Horizon 17.1 Released

As some of you may have noticed, Horizon 17.1 has been released. As Horizon 17 will form the basis for Meridian 2016, I’m extremely happy to see how much progress has been made on fixing issues.

Be sure to check out the Release Notes.

Horizon is our rapid release version, and its goal is to get all the cool new features out as soon as they are ready. In this case, right around the release we discovered an annoying but easy to fix bug with provisioning, so if you plan to run Horizon 17.1 you should also apply this patch.

Have fun, and we hope you find OpenNMS useful.

Bug

  • [NMS-4108] – Bad suggestions in install guide
  • [NMS-7152] – Enlinkd Topology Plugin fails to create LLDP links for mismatched link port descriptions.
  • [NMS-7820] – snmp-graph.properties.d files with –vertical-label="verticalLabel" in config
  • [NMS-7866] – Incorrect host in Location header when creating resources via ReST
  • [NMS-7910] – config-tester is broken
  • [NMS-7953] – Opsboard and Opspanel use wrong logo
  • [NMS-7966] – Unable to generate eventconf if a MIB (improperly?) uses a TC to define a TC
  • [NMS-7988] – container features.xml still references jersey 1.18 when it should reference 1.19.
  • [NMS-8000] – Topology-UI shows CDP links not correct
  • [NMS-8014] – Backshift graphs show dates in UTC instead of the browser's timezone
  • [NMS-8017] – StrafePing: Unexpected exception while polling PollableService
  • [NMS-8023] – Grafana Box did not work anymore
  • [NMS-8026] – Constant Thread Locking on Enlinkd
  • [NMS-8029] – Wrong use of opennms.web.base-url
  • [NMS-8038] – NRTG with newts – get StringIndexOutOfBoundsException
  • [NMS-8051] – The newts script only works if cassandra runs on localhost
  • [NMS-8054] – AlarmPersisterIT test is empty
  • [NMS-8064] – The 'newts init' script does work when authentication is enabled in Cassandra
  • [NMS-8065] – ReST Regression in Alarms/Events
  • [NMS-8066] – Newts only uses a single thread when writing to Cassandra
  • [NMS-8073] – User Restriction Filters: mapping class for roles to groups does not work
  • [NMS-8074] – The "Remove From Focus" button intermittently fails
  • [NMS-8079] – The OnmsDaoContainer does not update its cache correctly, leading to a NumberFormatException
  • [NMS-8084] – File not found exception for interfaceSTP-box.jsp on SNMP interface page
  • [NMS-8097] – Installation Guide Debian Bug Version 17.0.0
  • [NMS-8100] – Unable to complete creation of scheduled reports
  • [NMS-8103] – NPE when persisting data with Newts
  • [NMS-8104] – init script checkXmlFiles() fails to pick up errors
  • [NMS-8106] – INFO-severity syslog-derived events end up unmatched
  • [NMS-8109] – Memory leak when using the BSFDetector
  • [NMS-8112] – init script "configtest" exit value is always 1
  • [NMS-8116] – Heat map Alarms/Categories do not show all categories
  • [NMS-8119] – WS-MAN has broken ForeignSourceConfigRestService and the requisitions UI doesn't work.
  • [NMS-8123] – Removing ops boards via the configuration UI does not update the table
  • [NMS-8126] – JNA ping code reuses buffer causing inconsistent reads of packet contents
  • [NMS-8133] – Synchronizing a requisition fails
  • [NMS-8147] – Add all the services declared on Collectd and Pollerd configuration as available services on /opennms/rest/foreignSourceConfig/services

Enhancement

  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7978] – Add threshold comments and whitespace changes to match how the OpenNMS web GUI generates XML files
  • [NMS-8005] – Add support for using NRTG via Ajax calls
  • [NMS-8024] – Add support for OSGi-based Ticketing plugins
  • [NMS-8028] – Add event definition for postfix syslog message TLS disabled
  • [NMS-8030] – Improve the SNMP data collection config parsing to give more flexibility to the users
  • [NMS-8042] – set Up severities for RADLAN-MIB.events.xml
  • [NMS-8068] – Add support for marshalling NorthboundAlarms to XML
  • [NMS-8071] – Event definition file for JUNIPER-IVE-MIB
  • [NMS-8120] – Fixed a paragraph in the "Automatic Discovery" provisioning chapter
  • [NMS-8156] – Upgrade Angular Backend for the Requisitions UI

by Tarus at February 24, 2016 04:58 PM

February 22, 2016

OpenNMS Foundation Europe

[Release] – Ubuntu 14.04.4 LTS / CentOS 7.2.1511 and Horizon 17.1.0 Vagrant Box Update

We have updated our Vagrant box hosted on the Atlas platform with the latest OpenNMS Horizon 17, pre-configured with RRDtool. This is also the first VirtualBox image which comes with a pre-installed Grafana 2.6 and the OpenNMS data source plugin for Grafana.

Ubuntu 14 LTS based image

vagrant init opennms/vagrant-opennms-ubuntu-stable
vagrant up --provider=virtualbox

CentOS 7.2 based image

vagrant init opennms/vagrant-opennms-centos-stable
vagrant up --provider=virtualbox

If you run the default Vagrant box it uses a NAT interface. To get access to the running application from your host, just add the following lines to your Vagrantfile:

config.vm.network "forwarded_port", guest: 8980, host: 8980
config.vm.network "forwarded_port", guest: 3000, host: 3000

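With those forwards in place and the box running, the applications should be reachable from the host at the stock URLs (assuming you kept the default ports and the default OpenNMS web context):

http://localhost:8980/opennms (OpenNMS web UI)
http://localhost:3000 (Grafana)
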
If you want to build the box for a provider other than VirtualBox with Packer, just fork or contribute to the opennms-packer repository.

gl & hf

by Ronny Trommer at February 22, 2016 01:37 PM

February 20, 2016

Adventures in Open Source

Review: Angel Sensor Fitness Tracker

Angel Sensor represents everything that’s wrong with the technology industry today.

TL;DR: Two years ago, Angel Sensor ran an Indiegogo campaign to create an “open sensor for health and fitness”. They implied that the software would be open source. I finally got mine this week and it is total bollocks. Not only is the software not open source, the app that goes with it is barely an app. There is little communication from the vendor to the community, and while the hardware is solid, it is too expensive to manufacture, so the “classic” model is obsolete on delivery. Don’t deal with this company.

Okay, so I like metrics. I work on an open source project to monitor anything you can reach over the network. I have a weather station at my house and a temperature sensor in my workshop. I am very eager to gather information about what’s going on in my body, and while companies like Fitbit make great products for that purpose, I distrust sending this most personal data to a third party.

So a couple of years ago I did a search on “open source fitness tracking” and came across Angel Sensor. This company claimed that they were going to create an open health platform where the software would be open source, so I bought an Angel Sensor wristband and eagerly awaited its arrival.

And waited. And waited.

Two years later, it finally arrived and it is a total disappointment.

First, the good.

The packaging is nice.

Angel Sensor Box

The band itself is in its own compartment, and on the side of the box is a little drawer that you can pull out containing the accessories:

Angel Sensor What's In the Box

You get the band, a small instruction booklet, a charging cradle and seven flexible clasps (of various lengths) that help hold the band to your wrist.

I picked out a clasp and pretty soon had it on my wrist:

Angel Sensor On Wrist

Although heavier than I would have expected, it felt comfortable, something I could wear 24/7. My LG Urbane watch is slightly thicker but overall a bit lighter:

Angel Sensor vs. LG Urbane

The instruction booklet says to charge the band fully before using, and this is where the problems started. The “classic” uses a charging cradle. At both ends of the band (where the clasp connects) are metal studs. You insert one set of studs (marked on the band) into the cradle to charge it. The problem is that there is nothing in the charger to really grab on to the studs, so in my case it kept losing the connection and charging would stop. I had to prop it up at an angle in order to keep a connection, and even then I wasn’t sure about it staying in place.

Which is a shame, since the band itself is rather stylish. While there is no screen, there are two white LEDs on each side of the band that can glow and pulse to let you know something is going on. I had to keep an eye on the LEDs to make sure the thing was still charging.

Note that all this is moot since the classic proved too hard to produce. The new unit is called the M1. The M1 is thicker and you lose water resistance, which I think is an important feature. While I don’t plan to dive with a fitness tracker, I might wash dishes while wearing it, so the ability to be submerged in liquid for a small amount of time is a requirement. The M1 does use a standard microUSB charging connector so that is a plus in its favor.

Summary to this point: solid hardware design, although now obsolete, with a major flaw in the charger.

My real disappointment set in with the software.

I knew something was wrong when they announced on their blog that the first app released would be iOS only. Now I don’t have a problem with people leading with the iOS version – it is a huge market – but when your market differentiation is based on being “open” one would assume that an Android version would be first to encourage more contribution. Alas, the Android version seems to be more of an afterthought. You want to see it? Here it is:

Angel Sensor Android App Screen

Yup. You’re looking at it. No menu, no explanation, just four values.

To get to this point, I downloaded the app from Google Play, launched it, and then paired it with the band. You do this by tapping on the band’s button once, which will cause it to vibrate. The sensor will then show up on the app’s screen and you can connect to it. Note that you have to do this every time you launch the app, or at least I did. The sensor will be identified by a number and a MAC address.

On the main screen you get what I assume to be heart rate, body temperature, number of steps and some unknown value represented in units of “g”. No history, no way to, say, gather and export collected data, no way to even change the temperature units from Celsius to Fahrenheit. The title bar does show connection strength and battery life, but the only other thing is a tab for firmware updates. From what I can tell, it doesn’t actually check anywhere for firmware updates, but it would give you the ability to install one should it be released.

Of course, the app doesn’t tell you the current firmware version, and there doesn’t seem to be a download section anywhere on the Angel website (the support section is just a duplicate of the FAQ section).

Oh, and if you turn it sideways, you get some graphs.

Angel Sensor Android App graphs

No explanation of what is actually being graphed, but it does wiggle around a bit. I did find a YouTube video that suggests the top two graphs are heart-related and the bottom one is motion, and the iOS app shown in the video has more features, but since I don’t have a modern iPhone I couldn’t check it out.

Since I couldn’t believe this was it, I kept searching for software related to the Angel Sensor. I did manage to find some code on Github (which appears to be an SDK for accessing the API and perhaps the sad Android app). Of course, the License file is incomplete and doesn’t really say under what license the software is published. Once again people, access to source code doesn’t make it open source.

And this is the biggest flaw with Angel Sensor and one reason I have such an issue with the current technology environment. You have people like Paul Graham crowing about creating income inequality, and that has resulted in a new startup life cycle. You come up with an idea (that hasn’t changed) but now you wrap it in a bunch of buzzwords, like “open”, “IoT” and “mobile”, even if you don’t really understand what they mean.

Next, you raise some money through crowdfunding, often underestimating the amount needed so your “funding” can be a success. Now, remember, your audience isn’t the poor saps who decided to give you money – you want to show viability to VCs who can deliver the money you’ll actually need. You use the initial cash to build just enough product to either get a lot of investment or get acquired by someone wanting to cut a year or so off developing their own product. This is considered “success” in some circles.

(sigh)

This whole thing is a shame since there is a market for a truly open mobile health platform. I’m not insisting that the hardware be open (I’m not going to build my own silicon molds) but all the software, including the firmware, should be available under an OSI-approved license. Then you need to focus on building a community, and that really requires good communication.

Angel Sensor sucks at communication.

In addition to the total lack of documentation, even their blog fails miserably. In the last half of 2015 there were a total of three posts, the last one from 3 November. I used to be able to get a rapid response from a person named Io Salant, but when I wrote to them a few months ago asking for a status I got:

Io is on a well-deserved leave. Your email has been forwarded to hello@angelsensor.com
The team is quite busy, but will make every attempt to get to your email as quickly as possible.

Of course, I never heard back. So my now-“classic” Angel Sensor is destined for eBay.

In summary, this is another case of a crowdfunded effort done by people in over their heads with no real desire to create something that lasts but to make money at the expense of their customers. I wouldn’t trust Angel Sensor to feed my dog, much less monitor my health, and I can only hope someone with actual ability will come out with a truly open personal medical platform.

by Tarus at February 20, 2016 03:27 PM

February 18, 2016

OpenNMS Foundation Europe

[Release] – OpenNMS 17.1.0

We welcome our new release of OpenNMS Horizon 17.1.0 with the code name Talisker, named after the whisky distillery in Carbost, Scotland – the only distillery on the Isle of Skye.

We added support for the Web Services-Management protocol along with a few new northbound interfaces. You can find all the details in our Release Notes. Happy updating!

by Ronny Trommer at February 18, 2016 09:25 PM

February 01, 2016

Adventures in Open Source

Add a Weather Widget to OpenNMS Home Screen

I was recently at a client site where I met a man named Jeremy Ford. He’s sharp as a knife and even though, at the time, he was new to OpenNMS, he had already hacked a few neat things into the system (open source FTW).

Weathermap on OpenNMS Home Page

One of those was the addition of a weathermap to the OpenNMS home page. He has graciously put the code up on Github.

The code is a script that will generate a JSP file in the OpenNMS “includes” directory. All you have to do then is to add a reference to it in the main index.jsp file.

For those of you who don’t know or who have never poked around, under the $OPENNMS_HOME directory should be a directory called jetty-webapps. That is the web root directory for the Jetty servlet container that ships with OpenNMS.

Under that directory you’ll find a subdirectory for opennms. When you surf to http://[my OpenNMS Server]:8980/opennms that is the directory you are visiting. In it is an index.jsp file that serves as the main page.

If you are familiar with HTML, the JSP file is very similar. It can contain references to Java code, but a lot of it is straight HTML. The file is kept simple on purpose, with each of the three columns on the main page indicated by comments. The part you will need to change is the third column:

<!-- Right Column -->
        <div class="col-md-3" id="index-contentright">
                <!-- weather box -->
                <jsp:include page="/includes/weather.jsp" flush="false" />

Feel free to look around. If you ever wanted to rearrange the OpenNMS Home page, this is a good place to start.

Now, I used to like poking around with these files since they would update automatically, but later versions of OpenNMS (which contain later versions of Jetty) seem to require a restart. If you get an error, restart OpenNMS and see if it goes away.

Now the weather.jsp file gets generated by Jeremy’s Python script. In order to get that to work you’ll need to do two things. The most important is to get an API key from Weather Underground. It is a pretty easy process, but be aware that you can only do 500 queries a day without paying. The second thing you’ll need to do is edit the three URLs in the script and change the location. It is currently set to “CA/San_Francisco” but I was able to change it to “NC/Pittsboro” and it “just worked”.
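For illustration only, the conditions URL in the script ends up looking something like the line below. The API key is a placeholder and the exact feature path may differ slightly in Jeremy’s script, so treat it as a sketch:

http://api.wunderground.com/api/YOUR_API_KEY/conditions/q/NC/Pittsboro.json

The other two URLs in the script should follow the same pattern, just with different feature names.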

Finally, you’ll need to set the script up to run via cron. I’m not sure how frequently Weather Underground updates the data, but a 10 minute interval seems to work well. That’s only 144 queries a day, so you could easily double it and still be within your limit.

[IMPORTANT UPDATE: Jeremy pointed out that the script actually does three queries, not just one, so instead of doing 144 queries a day, it’s 432. That still leaves some headroom at a 10-minute interval, but you don’t want to increase the frequency too much.]
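For reference, a crontab entry along these lines will regenerate weather.jsp every 10 minutes. The interpreter and script path here are just examples, so point them at wherever you put Jeremy’s script:

# Regenerate the OpenNMS weather include every 10 minutes
*/10 * * * * /usr/bin/python /opt/scripts/weather.py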

Thanks to Jeremy for taking the time to share this. Remember, once you get it working, if you upgrade OpenNMS you’ll need to edit index.jsp and add it back, but that should be the only change needed.

by Tarus at February 01, 2016 09:29 PM

January 28, 2016

Adventures in Open Source

Dev-Jam 2016 Dates Announced

Yay! We have settled on dates for the eleventh (!) OpenNMS Dev-Jam Conference.

Dev-Jam 2015 Group Picture

Once again we will descend on the campus of the University of Minnesota for a week of fun, fellowship and hacking on OpenNMS and all things open source.

Anyone is welcome to attend, although I must stress that this is aimed at developers and it is highly unstructured. Despite that, we get a ton of things done and have a lot of fun doing it (and I’m not just saying that, there’s videos).

We stay at Yudof Hall on campus, and while that can scare older folks I want to point out the accommodation is quite nice and I’ve been told they have recently refurbished the dorm. If you want to stay on campus the cost is US$1500 for the week which includes all meals.

If you prefer hotels, there are several nearby, and you can come to the conference for US$800.

Registration is now open and space is limited. If you think you want to come but aren’t sure, let me know and I’ll try to save you a space. We’ve sold out the last two years.

Oh, sponsorships are available as well for $2500. You will help us bring someone deserving to Dev Jam who wouldn’t ordinarily get to attend, and you’ll get your logo and link on www.opennms.org for a year.

Dev Jam!

by Tarus at January 28, 2016 09:55 PM

January 25, 2016

Adventures in Open Source

OmniROM 6.0

For the last few days it has been hard to remain true to my free and open source roots. I guess I’ve been spoiled lately with almost everything I try out “just working”, but it wasn’t so with my upgrade to OmniROM 6.0 on my Nexus 6 (shamu).

I’ve been a big fan of OmniROM since it came out, and I base my phone purchases on what handsets are officially supported. While I tend not to rush to upgrade to the latest and greatest, once the official nightlies switched to Android “Marshmallow” I decided to make the jump.

Now there are a couple of tools that I can’t live without when playing with my phone. They are the Team Win Recovery Project (TWRP) and Titanium Backup. The first lets you create easy to restore complete backups and the latter allows you to restore application status even if you factory reset your device, which I had to do.

[NOTE: I should also mention that I rely on Chainfire’s SuperSU for root. It took me a while to find a link for it that I trust.]

When I tried the first 6.0 nightlies, all I did was sideload the ROM, wipe the caches, and reboot. I liked the new “OMNI” splash screen but once the phone booted, the error “Unfortunately process com.android.phone has stopped” popped up and couldn’t be cleared. Some investigation suggested a factory reset would fix the issue, but since I didn’t want to go through the hassle of restoring all of my applications I decided to just restore OmniROM 5.1 and wait to see if a later build would fix it.

Well, this weekend we got a dose of winter weather and I ended up home bound for several days, so I decided to give it another shot. I sideloaded the latest 6.0 nightly and sure enough, the same error occurred. So I did a factory reset and, voilà, the problem went away.

Now all I had to do was reload all 100+ apps. (sigh)

I started by installing the “pico” GApps package from Open GApps and in case you were wondering, the Nexus 6 uses a 32-bit ARM processor.

I guess I really shouldn’t complain, as doing a fresh install once in a while can clean out a bunch of cruft that I’ve installed over the past year or so, but I’ve come to expect OmniROM upgrades to be pretty easy.

One of the first things I installed from the Play store was the “K-9 Mail” application. Unfortunately, it kept having problems connecting to my personal mail server (the work one was fine). The sync would error with “SocketTimeoutException: fai”. So I rebooted back to Omni 5.1 and things seemed to work okay (although I did see that error when trying to sync some of the folders). Back I went to 6.0 (see where TWRP would come in handy here?) and I noticed that when I disabled Wi-Fi, it worked fine.

As I was trying to sleep last night it hit me – I bet it has something to do with IPv6. We use true IPv6 at the office, but not to our external corporate mail server, which would explain why a server in the office would fail but the other one work. At home I’m on Centurylink DSL and they don’t offer it (well, they offer 6rd which is IPv6 encapsulated over IPv4 but not only is it not “true” IPv6 you have to pay extra for a static IP to get it to work). I use a Hurricane Electric tunnel and apparently Marshmallow utilizes a different IPv6 stack and thus has issues trying to retrieve data from my mail server when using that protocol.

(sigh)

I tried turning off IPv6 on Android. It’s not easy and I couldn’t get any of the suggestions to work. Then I found a post that suggested it was the MTU, so I reduced the MTU to 1280 and still no love.

So I turned off the HE tunnel. Bam! K-9 started working fine.

For now I’ve just decided to leave IPv6 off. While I think we need to migrate there sooner rather than later, there is nothing I absolutely have to have IPv6 for at the moment and I think as bandwidth increases, having to tunnel will start to cause performance issues. Normal traffic, such as using rsync, seems to be faster without IPv6.

That experience cost me about two days, but at the moment I’m running the latest OmniROM and I’m pretty happy with it. The one open issue I have is that the AOSP keyboard crashes if you try to swipe (gesture type) but I just installed the Google Keyboard and now it works without issue.

I have to say that there were some moments when I was very close to installing the Google factory image back on my Nexus 6. It’s funny, but the ability to shake the phone to dismiss an alarm is kind of a critical app with me. Since the last time I checked it wasn’t an available option on the Google ROM, I was willing to stick it out a little longer and figure out my issues with OmniROM.

Heh, freedom.

by Tarus at January 25, 2016 10:18 PM

January 22, 2016

Adventures in Open Source

OpenNMS at Scale

So, yes, the gang from OpenNMS will be at the SCaLE conference this weekend (I will not be there, unfortunately, due to a self-imposed conference hiatus this year). It should be a great time, and we are happy to be a Gold Sponsor.

But this post is not about that. This is about how Horizon 17 and data collection can scale. You can come by the booth at SCaLE and learn more about it, but here is the overview.

When OpenNMS first started, we leveraged the great application RRDTool for storing performance data. When we discovered a Java port called JRobin, OpenNMS was modified to support that storage strategy as well.

Using a Round Robin database has a number of advantages. First, it’s compact. Once the file containing the RRD database is created, it never grows. Second, we used RRDTool to also graph the data.

However, there were problems. Many users had a need to store the raw collected data. RRDTool uses consolidation functions to store a time-series average. But the biggest issue was that writing lots of files required really fast hard drives. The more data you wanted to store, the greater your investment in disk arrays. Ultimately, you would hit a wall, which would require you to either reduce your data collection or partition out the data across multiple systems.

No more. With Horizon 17 OpenNMS fully supports a time-series database called Newts. Newts is built on Cassandra, and even a small Cassandra cluster can handle tens of thousands of inserts a second. Need more performance? Just add more nodes. Works across geographically distributed systems as well, so you get built-in high availability (something that was very difficult with RRDTool).

Just before Christmas I got to visit a customer on the Eastern Shore of Maryland. You wouldn’t think that location would be a hotbed of technical excellence, but it is rare that I get to work with such a quick team.

They brought me up for a “Getting to Know You” project. This is a two day engagement where we get to kick the tires on OpenNMS to see if it is a good fit. They had been using Zenoss Core (the free version) and they hit a wall. The features they wanted were all in the “enterprise” paid version and the free version just wouldn’t meet their needs. OpenNMS did, and being truly open source it fit their philosophy (and budget) much better.

This was a fun trip for me because they had already done most of the work. They had OpenNMS installed and monitoring their network, and they just needed me to help out on some interesting use cases.

One of their issues was the need to store a lot of performance data, and since I was eager to play with the Newts integration we decided to test it out.

In order to enable Newts, first you need a Cassandra cluster. It turns out that ScyllaDB works as well (more on that a bit later). If you are looking at the Newts website you can ignore the instructions on installing it, as it is built directly into OpenNMS.

Another thing built in to OpenNMS is a new graphing library called Backshift. Since OpenNMS relied on RRDTool for graphing, a new data visualization tool was needed. Backshift leverages the RRDTool graphing syntax so your pre-defined graphs will work automatically. Note that some options, such as CANVAS colors, have not been implemented yet.

To switch to Newts, in the opennms.properties file you’ll find a section:

###### Time Series Strategy ####
# Use this property to set the strategy used to persist and retrieve time series metrics:
# Supported values are:
#   rrd (default)
#   newts

org.opennms.timeseries.strategy=newts

Note: “rrd” strategy can refer to either JRobin or RRDTool, with JRobin as the default. This is set in rrd-configuration.properties.
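If memory serves (check the comments in your own rrd-configuration.properties for the exact class names), the selection looks roughly like this:

# Pure Java implementation (the default)
org.opennms.rrd.strategyClass=org.opennms.netmgt.rrd.jrobin.JRobinRrdStrategy
# Native RRDTool via JNI, if you have rrdtool and the JNI bindings installed
#org.opennms.rrd.strategyClass=org.opennms.netmgt.rrd.rrdtool.MultithreadedJniRrdStrategy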

The next section determines what will render the graphs.

###### Graphing #####
# Use this property to set the graph rendering engine type.  If set to 'auto', attempt
# to choose the appropriate backend depending on org.opennms.timeseries.strategy above.
# Supported values are:
#   auto (default)
#   png
#   placeholder
#   backshift
org.opennms.web.graphs.engine=auto

If you are using Newts, the “auto” setting will utilize Backshift but here is where you could set Backshift as the renderer even if you want to use an RRD strategy. You should try it out. It’s cool.

Finally, we come to the settings for Newts:

###### Newts #####
# Use these properties to configure persistence using Newts
# Note that Newts must be enabled using the 'org.opennms.timeseries.strategy' property
# for these to take effect.
#
org.opennms.newts.config.hostname=10.110.4.30,10.110.4.32
#org.opennms.newts.config.keyspace=newts

There are a lot of settings and most of those are described in the documentation, but in this case I wanted to demonstrate that you can point OpenNMS to multiple Cassandra instances. You can also set different keyspace names which allows multiple instances of OpenNMS to talk to the same Cassandra cluster and not share data.

From the “fine” documentation, they also recommend that you store the data based on the foreign source by setting this variable:

org.opennms.rrd.storeByForeignSource=true

I would recommend this if you are using provisiond and requisitions. If you are currently doing auto-discovery, then it may be better to reference it by nodeid, which is the default.

I want to point out two other values that will need to be increased from the defaults: org.opennms.newts.config.ring_buffer_size and org.opennms.newts.config.cache.max_entries. For this system they were both set to 1048576. The ring buffer is especially important since, should it fill up, samples will be discarded.
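For this install that meant adding the following to opennms.properties; 1048576 is simply the value we used here, not a universal recommendation:

# Maximum number of samples that can be queued for insertion into Cassandra.
# If this ring buffer fills up, additional samples are discarded.
org.opennms.newts.config.ring_buffer_size=1048576
# Maximum number of entries in the Newts cache.
org.opennms.newts.config.cache.max_entries=1048576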

So, how did it go? Well, after fixing a bug with the ring buffer, everything went well. That bug is one reason that features like this aren’t immediately included in Meridian. Luckily we were working with a client who was willing to let us investigate and correct the issue. By the time it hits Meridian 2016, it will be completely ready for production.

If you enable the OpenNMS-JVM service on your OpenNMS node, the system will automatically collect Newts performance data (assuming Newts is enabled). OpenNMS will also collect performance data from the Cassandra cluster, including both general Cassandra metrics and Newts-specific ones.

This system is connected to a two node Cassandra cluster and managing 3.8K inserts/sec.

Newts Samples Inserted

If I’m doing the math correctly, since we collect values once every 300 seconds (5 minutes) by default, that’s roughly 1.15 million data points collected each cycle, and the system isn’t even working hard.

OpenNMS will also collect ring buffer information, and I took a screenshot to demonstrate Backshift, which displays the value of each data point as you mouse over it.

Newts Ring Buffer

Horizon 17 ships with a load testing program. For this cluster:

[root@nms stress]# java -jar target/newts-stress-jar-with-dependencies.jar INSERT -B 16 -n 32 -r 100 -m 1 -H cluster
-- Meters ----------------------------------------------------------------------
org.opennms.newts.stress.InsertDispatcher.samples
             count = 10512100
         mean rate = 51989.68 events/second
     1-minute rate = 51906.38 events/second
     5-minute rate = 38806.02 events/second
    15-minute rate = 31232.98 events/second

so there is plenty of room to grow. Need something faster? Just add more nodes. Or, you can switch to ScyllaDB, which is a rewrite of Cassandra in C++. When run against a four-node ScyllaDB cluster the results were:

[root@nms stress]# java -jar target/newts-stress-jar-with-dependencies.jar INSERT -B 16 -n 32 -r 100 -m 1 -H cluster
-- Meters ----------------------------------------------------------------------
org.opennms.newts.stress.InsertDispatcher.samples
             count = 10512100
         mean rate = 89073.32 events/second
     1-minute rate = 88048.48 events/second
     5-minute rate = 85217.92 events/second
    15-minute rate = 84110.52 events/second

Unfortunately I do not have statistics for a four node Cassandra cluster to compare it directly with ScyllaDB.

Of course the Newts data directly fits in with the OpenNMS Grafana integration.

Grafana Inserts per Second

Which brings me to one downside of this storage strategy: it’s fast, but it isn’t compact. On this system the disk space is growing at about 4GB/day, which would be about 1.5TB/year.

Grafana Disk Space

If you consider that the data is replicated across Cassandra nodes, you would need that amount of space on each one. Since the availability of multi-Terabyte drives is pretty common, this shouldn’t be a problem, but be sure to ask yourself if all the data you are collecting is really necessary. Just because you can collect the data doesn’t mean you should.

OpenNMS is finally at the point where storing performance data is no longer an issue. You are more likely to hit limits with the collector, which in part is going to be driven by the speed of the network. I’ve been in large data centers with hundreds of thousands of interfaces all with sub-millisecond latency. On that network, OpenNMS could collect on hundreds of millions of data points. On a network with lots of remote equipment, however, timeouts and delays will impact how much data OpenNMS could collect.

But with a little creativity, even that goes away. Think about it – with a common, decentralized data storage system like Cassandra, you could have multiple OpenNMS instances all talking to the same data store. If you have them share a common database, you can use collectd filters to spread data collection out over any number of machines. While this would take planning, it is doable today.
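As a rough sketch of that idea (the package names and address ranges here are invented), each OpenNMS instance would carry a collectd package whose filter only matches its slice of the network:

<!-- collectd-configuration.xml on instance A: only collect from 10.1.0.0/16 -->
<package name="collect-site-a">
  <filter>IPADDR IPLIKE 10.1.*.*</filter>
  <!-- include ranges, services and parameters as in the stock package -->
</package>

<!-- instance B would use a complementary filter, e.g. IPADDR IPLIKE 10.2.*.* -->

With both instances writing to the same Cassandra keyspace, the collected data all lands in one place.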

What about tomorrow? Well, Horizon 18 will introduce the OpenNMS Minion code. Minions will allow OpenNMS to scale horizontally and can be managed directly from OpenNMS – no configuration tricks needed. This will truly position OpenNMS for the Internet of Things.

by Tarus at January 22, 2016 09:31 PM

January 20, 2016

Adventures in Open Source

Triggering OpenNMS Notifications Based on Event Parameters

I recently had a client ask how to notify on an event where they wanted to match on certain event parameters. I decided to put this on the wiki with the hope that people would find it useful.
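The wiki article has the full walk-through, but the general shape is a notification in notifications.xml that matches both the event UEI and a specific parameter value via a varbind element. The UEI, parameter name and value below are made up, and the element names and ordering should be checked against the wiki rather than taken as gospel:

<notification name="diskFullOnCriticalHost" status="on">
  <uei>uei.opennms.org/vendor/example/diskFull</uei>
  <rule>IPADDR != '0.0.0.0'</rule>
  <destinationPath>Email-Admin</destinationPath>
  <text-message>Disk full event on %nodelabel%: %parm[all]%</text-message>
  <!-- only match events whose "mountPoint" parameter equals "/var" -->
  <varbind>
    <vbname>mountPoint</vbname>
    <vbvalue>/var</vbvalue>
  </varbind>
</notification>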

by Tarus at January 20, 2016 09:03 PM

January 19, 2016

The OpenNMS Group

OpenNMS to Exhibit at SCaLE 14x

The OpenNMS Group is proud to be a Gold Sponsor of the 14th annual Southern California Linux Expo to be held 22-24 January in Pasadena, California.

In addition to having a booth in the expo hall, Ken Eshelby will be presenting a talk entitled “Internet of Thingies”.

Also, join us at the “Network and Server Management” birds of a feather group! We will have food, drinks, and good company!

by jessi at January 19, 2016 08:28 PM

Adventures in Open Source

Avoiding the Sad Graph of Software Death

Seth recently sent me to an interesting article by Gregory Brown discussing a “death spiral” often faced by software projects when issues and feature requests start to outpace the ability to close them.

Sad Graph of Death

Now Seth is pretty much in charge of managing our Jira instance, which is key to managing the progress of OpenNMS software development. He decided to look at our record:

OpenNMS Issues Graph

[UPDATE: Logged into Jira to get a lot more issues on the graph]

Not bad, not bad at all.

A lot of our ability to keep up with issues comes from our project’s investment in using the tool. It is very easy to let things slide, resulting in the first graph above and possibly causing a project to declare “issue bankruptcy”. Since all of this information is public for OpenNMS, it is important to keep it up to date, and while we never have enough time for all the things we need to do, we make time for this.

I think it speaks volumes for Seth and the rest of the team that OpenNMS issues are managed so well. In part it comes naturally from “the open source way” since projects should be as transparent as possible, and managing issues is a key part of that.

by Tarus at January 19, 2016 04:59 PM

January 15, 2016

Adventures in Open Source

The Inverter: Episode 58 – Nappy Hue Year

It’s a new year, and that means a new Bad Voltage.

Let’s hope the Intro is not an indication of things to come. Worst … intro … ever. Seriously, just jump to the 3 minute mark. You’ll be glad you did.

Okay, brand new year and that means predictions, where I predict that Jeremy will once again win. Yes, his entries aren’t all that strong, but he always wins.

The way the game works is that each member of the BV team must make two predictions, with bonus predictions available as well.

Jeremy’s Predictions:

  • This is the year that some sort of Artificial Intelligence (AI) or Virtual Reality (VR) device goes mainstream. I’m not sure if Mycroft or Echo counts as an AI device, but after playing with the Samsung Gear VR I made the prediction that VR would really take off this year. He specifically stated that the device in question would not be the Oculus Rift.
  • Apple will have a down year, meaning that gross revenues will be lower this year than in 2015. Hrm, I’ve been thinking this might happen but I’m not sure this is the year. In the show they brought up the prospect of Apple making a television, and if that happens I would expect enough fans to rush out and buy it that Apple’s revenues would increase considerably. But without a new product line, I think there is a good chance this could happen.
  • Bonus: a device with a bendable display will become popular. There are devices out there with bendable displays, but nothing much outside of CES. We’ll see.

Bryan’s Predictions:

  • Canonical pulls out of the phone/tablet business. While the Ubuntu phone hasn’t been a huge success, it is the vehicle for exploring the idea of turning a handset-sized device into the only computer you use (i.e. you connect it up to a keyboard and screen to make a “desktop”). I can’t really see Shuttleworth giving this up, but in a mobile market that is pretty much owned by Apple and Android, this probably makes good business sense.
  • In a repeat from last year, Bryan predicts that ChromeOS will run Android apps natively, i.e. any app you can get from the Google Play store will run on Chrome without any special tricks. Is the second time the charm?
  • Bonus: Wayland will not ship as the default replacement for X on any major distro. Probably a safe bet.

Jono’s Predictions:

  • The VR Project Morpheus on Playstation will be more popular than Oculus Rift. Another VR prediction, and it is hard to argue with his logic. Sony already has a large user base with its Playstation 4 console, and if this product can actually make it to market with a decent price point, you can expect a lot of adoption. Contrast that to the Oculus Rift, whose user base is still unknown, plus an estimated price tag of US$600 and the need for a high end graphics computer, and Morpheus has a strong chance to own the market. Making it to market and the overall user experience will still determine if this is a winner or a dud.
  • Part of Canonical will be sold off. Considering that Canonical has a number of branches, from its mobile division, the desktop and the cloud, the company might be stretched a little thin to focus on all of them. Plus, Shuttleworth has been bank-rolling this endeavor for a while now and he may want to cash some of it out. Moving the cloud part of the company to a separate entity makes the most sense, but I’m not feeling that this will happen this year.
  • Bonus: a crowdfunding campaign will pass US$200MM. The current record crowdfunding campaign is for the video game Star Citizen, which has passed US$100MM, so Jono is betting that something will come along that is twice as successful. As I’ve started to sour on crowdfunding, as have others I know, it would have to be something pretty spectacular.

Stuart’s Predictions:

  • People will stop carrying cash. Well, duh. It is rare that I have more than a couple of dollars on me at any time. Now, this is different when I travel, but around town I pay for everything with a credit card. I get the one bill every month and I can track my purchases. Heck, even my favorite BBQ joint takes cards now (despite what Google says). Not sure how they will score this one.
  • Microsoft will open source the Microsoft Edge browser. Hrm – Microsoft has been embracing open source more and more lately, so this isn’t out of the realm of possibility. If I were a betting man I’d bet against it, but it could happen.
  • Bonus: he was going to originally bet that Canonical would get out of the phone business, but since Bryan beat him to it he went with smaller phones would outsell larger phones in 2016. It’s going to be hard to measure, but he gets this right if phones 5 inches and smaller move more units than phones bigger than that. I don’t know – I love my Nexus 6 and I think once you get used to a larger phone it is hard to go back, but we’ll see.

The gang seemed pretty much in agreement this year. No one joined me in the prediction that a large “cloud” vendor would have a significant security issue, but both Jono and Jeremy mentioned VR.

The next segment was on a product called the “Coin”. This is a device that is supposed to replace all of the credit cards in your wallet. Intriguing, but it has one serious flaw – it doesn’t work everywhere. If you can’t be sure it will work, then you end up having to carry some spare cards, and that defeats the whole purpose. Coin’s website “onlycoin.com” seems to imply that Coin is the only thing you need, but even they admit there are problems.

It also doesn’t seem to support some of the newer technologies, such as “Chip and PIN” (which isn’t exactly new). This means that Coin is probably dead on arrival. Jeremy brought up a competitor called Plastc, but that product isn’t out yet, so the fact that Coin is shipping gives it an advantage.

I don’t carry that many cards to begin with, so I have little interest in this. I’d rather see NFC pay technologies take off since I usually have my phone with me. I need more help with my “rewards” cards such as for grocery stores, and there are already apps for that, like Stocard. I don’t see either of these things taking off, but I give the edge to Plastc over Coin.

Note: Stocard is pretty awesome. It is dead easy to add cards and they have an Android Wear integration so I don’t even need to take the phone out of my pocket.

The last segment was an interview with Jorge Castro (the guy from Canonical’s Juju project and not the actor from Lost). Juju is an “orchestration” application, and while focused on the Cloud I can’t help but group it with Chef, Puppet and Ansible (a friend of mine who used to work on Juju just moved to Ansible). Chef has “recipes” and Juju has “charms”.

I don’t do this level of system administration (we are leaning toward using Ansible at OpenNMS just ’cause I love Red Hat), so much of the discussion was lost on me (lost, get it?). I couldn’t help but think of my favorite naming scheme, however, which comes from the now defunct Sorcerer Linux distribution. In it, software packages were called “spells” and you would install applications using the command “cast”. The repository of all the software packages was called the “grimoire”.

Awesome.

The show closed with a reminder that the next BV would be Live Voltage at the SCaLE conference. I’ve seen these guys get wound up in front of 50 people, so I can’t imagine what will happen in front of nearly 1000 people. They have lots of prizes to give away as well, so be there. I can’t make it but I hope there is a live stream and a Twitter feed like the last Live Voltage show so I can at least follow along. I can’t promise it will be good, but I can promise it will be memorable.

So, overall not a great show but not bad. I don’t like the title, and if you listen to the Outro you might agree with me that “Huge Bag Full of Nickels” would have been a better one.

by Tarus at January 15, 2016 09:31 PM

January 13, 2016

Adventures in Open Source

Annual LinuxQuestions Poll

Just a quick note that the annual LinuxQuestions “Member’s Choice” poll is out. While I don’t believe OpenNMS is known to many of the members of that site, if you feel like showing it a little love, please register and vote.

http://www.linuxquestions.org/questions/2015-linuxquestions-org-members-choice-awards-117/network-monitoring-application-of-the-year-4175562720/

Many thanks to Jeremy Garcia for maintaining that site and including OpenNMS.

by Tarus at January 13, 2016 10:58 PM

January 07, 2016

Adventures in Open Source

Capitalism and the Open Source Way

I’m supposed to be on vacation today. My 50th birthday is coming up and I’m taking some time off to celebrate and reflect. But Jan Wildeboer posted a link to a critical article about a recent Paul Graham essay, and it touched a nerve. I wanted to write down a few thoughts about it while they were fresh.

In the essay, Graham boasts about increasing income inequality. It’s the new version of “greed is good“. He proposes that the best method for modeling democracy is that of the startup. I can’t agree with that.

Look, I work at a ten-year-old startup, but that isn’t what Graham means. He means the Silicon Valley startup which follows this basic model:

1) Come up with an idea
2) Get some rich people to give you money to pursue the idea

If you get past Step 2, this is considered “a success” because if a rich guy wants to give you money your idea must be good, right?

3) Burn through that money as fast as you can in search of turning your idea into something people will watch, download, share or buy
4) Run out of money
5) Get more money
6) Go back to step 4, eroding your share of the idea until the rich people own it

Success is then measured by an acquisition or IPO. Failure is that you can’t get past step 5 at some point.

I can’t remember who told me this, so I do apologize for not being able to credit you, but it was pointed out to me that a lot of startups tend to hit the US$5MM revenue mark and then stall. The reason, she said (and I do believe it was a she) was that startups are aimed at the culture of Silicon Valley, and quite frequently an idea that works in the Valley doesn’t work elsewhere.

The Valley consists mainly of young, white and Asian males. I’ve spent a lot of time in the Valley, and while I’ve met a lot of amazing people, I’ve met an equal number of assholes. The latter seemed to measure value strictly on wealth, and they pursue money above all else (“go big or go home”). Look, I think money is great, it can provide options and security, but the sole pursuit of money is not a good way to live. If I have any wisdom to impart after 50 years it would be to buy experiences, not things. The former will last a lot longer.

And this shameless pursuit of money, in both the Valley and on Wall Street, is creating a huge wealth inequality. From what I could find on the web, the average software engineer in the Valley makes around US$150K. Meanwhile, for the same year the average household income was a little over US$50K – about a third of that, and probably with more than one person working.

People will defend those salaries because they say they are valuable, but if we are talking about a startup-driven economy, most startups both lose money and eventually fail. So I’m not sure it can be defended on value creation. Plus, as the wealth gap gets larger and larger, there is a real, non-zero chance of a whole lot of people with baseball bats storming those gated communities.

When I was younger and took my first Spanish class, the teacher told us that many countries in South and Central America, where Spanish is spoken, had turbulent political histories. She explained that it was often due to wealth inequality. When you have a small but significant group of rich people and a whole lot of poor people, those at the “top” don’t tend to stay there. She then pointed to the US and its large middle class, and argued that it was one of the reasons we’ve been around for 200+ years.

Also, back in the “old days”, if you asked a kid to list jobs you’d get things like teacher, policeman, doctor, janitor, nurse, mailman, lawyer, baker, fireman and, my favorite, astronaut.

Those are wonderful, productive roles in society. Sure, the doctor and lawyer made more money, but we didn’t look down on the janitor (I can remember really liking the janitor at our elementary school and thinking he was so nice to keep our school clean). But somewhere in the last ten to twenty years, we’ve seemed to lose our way as a culture and we look down on a lot of these jobs. The message seems to be “be scared and buy shit” and success is measured on how much shit you can buy.

It’s not sustainable. In finance the idea of “grow, grow, grow!” is considered the goal. In nature it’s called “cancer”.

This is one reason I love my job. At OpenNMS our business plan is simple: spend less than you earn. The mission statement is: help customers, have fun, make money.

A lot of that comes from the fact that we base our business around open source software. One of the traditional methods for securing profit in the software industry, especially the Valley, is to lock your customers into your products so they both become reliant on them and are unable to easily switch. Then you can increase your prices and … profit!

In order to do this, you have to have a lot of secrets. Your code has to be secret, your product roadmap needs to be secret, and you have to spend a lot of money on engineering talent because you have to find highly skilled specialists to work in such an environment.

Contrast that to open source. Everything is transparent. The code is out there. The roadmap is out there. This week is the CES show in Las Vegas where products will be “unveiled”. We don’t unveil anything – you can follow the development branches in our git repository in real time. While I am lucky to work with highly skilled people, they found OpenNMS, not the other way around, because they had something to offer. Our customers pay us a fair rate for our work because if it isn’t worth it to them, they don’t have to buy it.

This has allowed OpenNMS to survive and, yes, grow, over the last decade while a number of startups have come and gone.

This transparency is important to the “open source way“. It promotes both community and participation, and it is truly a meritocracy, unlike much of the Valley. In the Valley, value is measured more by how much money you make and who you know. In open source, it is based on what you get done and how well you advance the project.

[Note: just to be fair, I know a number of very talented people in the Valley who are worth every penny they make. But I know way more people who, in no way, earn their exorbitant salaries]

Another comment that triggered this post was a tweet by John Cleese about a quote from Charlie Mayfield, the Chairman of the John Lewis Partnership which is a huge retail concern in the UK. He said “… maximisation of profit is not our goal. We aim to make sufficient profit.”

Sufficient Profit Tweet

What a novel idea.

I’m sure my comments will be easily dismissed by many as just the ranting of an old fart, similar to “get off my lawn”. But I have always wished for OpenNMS to be, above all else, something that lasts – something that survives me and something that provides value long after I’m gone. Would I like more money? Of course I would, but for longevity the focus must be on creating value and providing a great experience for those who work on the project, and the money will come.

After all, it is the experience that lasts.

by Tarus at January 07, 2016 03:59 PM

January 06, 2016

Adventures in Open Source

The Inverter: Episode 57 – Deck the Blockchains

The last Bad Voltage of 2015 is a long one. Bryan is out sick, which is surprising since he only misses the shows with which I’m involved, so I guess he was really sick this time.

Since the first BV episode of the year includes predictions, the last one of the year is used to measure how well the guys did, and this was the topic of the first part of the program.

Aq predicted that mobile phone payments via NFC (such as Apple Pay and Android Pay) would increase greatly. They did, although the actual amount was off by more than an order of magnitude from what he predicted. I’m not sure why he didn’t get credit for this one since he was correct – he just missed a zero at the end. He also predicted that Steam game consoles would be a big success. One of the issues with measuring these predictions is that it is hard to get verifiable numbers, but they all agreed that had Steam shipped a million consoles they would have mentioned it.

His “extry credit” prediction was that Canonical would get bought. They didn’t, so Aq didn’t do so well overall.

Then they moved on to Jono. He predicted there would be a large migration away from traditional sources of video, such as cable television and satellite, to streaming services such as Netflix and Hulu. This was again hard to verify (remember the quote that there are lies, damned lies and statistics). I think one of the reasons is that, especially in the case of cable, the vendors bundle so much together that it is usually cheaper to get television included as part of a package instead of just going Internet-only. Considering how many people talk about shows that are only available via streaming services and how clients for those services are now ubiquitous in televisions, it seems to be a safe bet that people are spending more of their time watching those services, at the cost of traditional shows, but it is very hard to measure with any level of objectivity.

Speaking of televisions, Jono also predicted a surge in 4K televisions to the point that they would be available for $500 or less. I haven’t seen it. The content is just not there yet, and while, yes, you can buy a 4K TV on Amazon for less than US$500, no one who really cared about the quality of that picture would buy one. The best 4K TV recommended by Wirecutter is still nearly US$1600.

So I don’t think he should get credit for that one.

His extra prediction was a large increase in “connected homes”. This was vague enough to be impossible to measure, but with products like those from Nest becoming more popular, it seems inevitable. I think there was definitely a jump in 2015, but then again going from nearly zero to only a handful would still be a huge increase, percentage-wise. I think it will be some time before a majority of homes in the US are “connected” in an Internet of Things fashion.

Jeremy’s predictions were next. He predicted that laptop and desktop computer sales would actually go up after years of decline, and while the rate of decline slowed, this was a miss.

The guys gave him his second one, which was that wireless charging for portable devices would become the norm (with a notable exception in Apple). While I’m charging my Nexus 6 right now on a TYLT charger, the latest generation of Nexus phones do not support wireless charging, and with the introduction of USB-C and “fast charging” I think wireless charging has peaked. Still, he got credit for it, so I think Aq should get credit for his mobile payments prediction.

Jeremy had two bonus predictions. One was that the markets would both see a peak in the NASDAQ index (which happened) as well as a correction of more than 10% (which also happened). His prediction of an Uber IPO did not happen, however.

Bryan wasn’t around to defend his predictions, but in the first case it was the opposite of Aq’s prediction that Steam consoles would be a huge success with the prediction that they would ship zero units. That didn’t happen, of course.

He also predicted that Ubuntu phone sales would be minor compared to other “open source” handset units such as those from Jolla. While no one would claim the Ubuntu phone was a runaway success, from what can be guessed from various sales figures, it seems to have sold about as well as the alternatives.

Finally, his bonus prediction would be that ChromeOS would be able to run all Android apps natively. That, too, didn’t happen. It would have been interesting to hear his analysis of his performance, but he was pretty blunt in that he totally expected to lose.

So, Jeremy wins.

The second segment was a bit heady even for these guys. It concerns an announcement by the Linux Foundation to promote the creation of “block chain” tools.

Now, I kind of think I have my brain around block chains, but don’t expect me to explain them. The block chain was invented as part of the bitcoin protocol, and it is a type of ledger database that can confirm transactions and resist tampering. This can be useful, since it provides a very distributed and public way of running a list of transactions, but there is no requirement that the block chains themselves be made public.

The idea is that we could promote this for use in, say, banking, and it could both improve speed and reliability.

I’m not sure it made a great topic for the show, however. This is esoteric stuff, and for once there were a lot of pregnant pauses in the discussion. I think the overall consensus was that this is a Good Thing™ but that in practical use the data won’t be very open.

The next segment was a review of the Titan USB cable – a hardened USB cable to resist damage. While not bad for a last-minute substitution since Bryan was unavailable to do his originally scheduled review, I thought the discussion went on way too long on an already long show. TL;DR: break a lot of USB cables? You might want to check this out. No? Don’t worry about it.

While the cable part of the Titan is well protected, the connector ends, a common source for failure, aren’t much different from a normal cable. Considering the cost, if you only damage a cable occasionally, it probably isn’t worth it to get a Titan.

At least it wasn’t about that $500 gold HDMI cable. The thing I love about digital is that it pretty much works or doesn’t work. I used to agonize over analog speaker cable, but cable quality is considerably less important in a purely digital realm.

The final segment concerned an apparent conflict of interest around the Linux Foundation’s role in the lawsuit involving the Software Freedom Conservancy and VMware concerning GPL violations. There are a lot of corporate interests involved with the Linux Foundation, and the general question is whether the Foundation is more concerned with protecting those interests than with software freedom.

My own experience with GPL enforcement is that it is a shit job. Many people think that if the software is “free” they should be able to do whatever they want to with it, and so they don’t understand the problem when some third party decides to commercialize your hard work.

Next, discovery is a pain. If you can see the code, it is somewhat easy to determine if it was the same or different as another piece of code, but the problem with GPL enforcement is usually the code in question is closed. Discovery costs a lot of money as well, and money is not something a lot of open source projects have in abundance.

Finally, even if you have a case, getting a judge that can understand the nuances of the issue is harder still. Without such an understanding, it is both hard to win the case as well as to get damages. Even if you succeed, the remedy might just mean open sourcing part of the infringing code with no monetary damages.

When you look at it, pursuing a GPL violation is a thankless job that most projects can’t even consider. But it is incredibly important to the future of free software that those who create it have the power to determine under what conditions their work can be used. It is why we donate to the Software Freedom Conservancy. They are fighting the good fight, in very much a David and Goliath scenario, for the rights of everyone involved with free software. There are not many people up to that task.

For example, it appears that the car manufacturer Tesla is in violation of the GPL. Tesla is popular and well funded. There are very few people, especially those in the technology industry, who wouldn’t want to own a Tesla. So, do you want to sue them? First, they will bury you in legal procedures that will drain what little funds you have. Next, people will be mad at you for “attacking” such a cool company. Third, your chance for success is slim.

Now I don’t have any experience with the Linux Foundation. I don’t know anyone there and I’ve never been to their conferences. I think they can play an important role in acting as a bridge between traditional corporations and the free and open source software community. It seems to me that they are at a crossroads, however. If they allow large companies like VMWare to control the message, then they will eventually become just another irrelevant mouthpiece for the commercial software industry. Yes, that stand may cost them contributions in the near term, but if they truly want to represent this wonderful environment that has grown up around Linux, they have to do it.

I just went and looked up the compensation of the officers of the Linux Foundation. This is an organization with income around US$23MM per year (in 2014). The Executive Director makes about US$500K per year, the COO a little more than that, and there are a number of people making north of US$200K. In fact, of the roughly US$7.5MM salary expense, a third of that went to eight people. Considering that much of the Linux Foundation income comes from corporate donations, I think these eight would have a strong incentive to act in a way to protect those donations, even at the expense of Linux and open source as a whole.

Let’s compare that to the Software Freedom Conservancy. For the same time period they had about US$868K in total revenue, so about 1/30th of that of the Linux Foundation. They only have one listed employee, Bradley Kuhn, with a reasonable salary of US$91K a year (with total compensation a little north of US$110K).

Who would you trust with defending your rights concerning free software? Eight people who together make more than US$2.5MM a year from corporate sponsors or one guy who makes US$100K?

It’s funny, I wasn’t very upset about this segment when I listened to it, but now that I’m investigating it more, it is starting to piss me off. I expect someone in the Valley to defend those high salaries for the Linux Foundation as part of doing business in that area, so I looked up a similar organization, the Wikimedia Foundation. It is twice as large as the Linux Foundation, yet its Executive Director makes around US$200K/year.

Grrr.

I’m going to stop now since I’ll probably write something I’ll regret. For full disclosure I want to state that I’ve known Bradley Kuhn for several years, and even though we tend to disagree on almost everything, I consider him a friend. I also know that Karen Sandler has joined the Software Freedom Conservancy in a paid role in 2015, so their salary expenses will go up, but I’d bet my life that she isn’t making US$500K/year. Finally, remember that if you shop at Amazon be sure to go to smile.amazon.com and you can choose a charity to get a small portion of your purchase donated to them. I send mine to, you guessed it, the Software Freedom Conservancy.

Getting back to Bad Voltage, the show ended with a reminder that the “best Live Voltage show ever” will happen at the end of the month at the Southern California Linux Expo conference in Pasadena. You should be there.

Since the next show will be about predictions for 2016, I’m going to throw my two into the ring.

First, a well known cloud service will experience a large security breach that will make national headlines. I won’t point out possible targets for fear of getting sued, but it has to happen eventually and I pick this to be the year.

Second, by Christmas, consumer virtual reality will be the “it” gift. We’re not there yet, but I got to play with a Samsung Gear VR headset over the holidays and I was impressed. It is a more polished version of Google Cardboard although still based on a phone, and it is developed by Oculus, the current leaders in this type of technology.

While the resolution isn’t great yet, the potential is staggering. I watched demos that included a “fly along” with the Blue Angels, and although the resolution reminded me of early editions of Microsoft’s Flight Simulator, it was cool, if a little nauseating.

There was a Myst-like game called “Lands End” that was also enjoyable, although once again the low resolution detracted from the experience.

Then I played Anshar Wars. It was a near perfect VR experience. A first-person space shooter, you fly around and dogfight with the bad guys while dodging asteroids and picking up power-ups. No headaches, no complaints about resolution, it was something I could have played for hours. Note that it helped to be in a swivel chair ’cause you swing around a lot.

So those are my predictions. Since I doubt I’ll have the stamina to keep up with these posts, I’ll probably never revisit them, but the chances of that improve if I’m right.

by Tarus at January 06, 2016 03:45 PM

January 05, 2016

Adventures in Open Source

♫ Don’t Call It a Comeback ♫

Welcome to 2016. My year started out with an invitation to join the AARP. (sigh)

As my three readers know, when it comes to this business of open source we are pretty much making things up as we go along. We are led by our business plan of “spend less money than you earn” and our mission statement of “help customers, have fun, make money” but the rest is pretty fluid.

In 2013 we mixed things up and tried a more “traditional” start up path by seeking out investment and spending more money than we had. It didn’t work out so well.

Thus 2014 was more of a rebuilding year as we tried to move the focus back to our roots. It paid off, as 2015 was a very good year. We had record gross revenues, and although we didn’t make much money on the bottom line, it was positive once again. At the moment we are still investing in the company and the project so pretty much every extra dollar goes into growth.

And we had a lot of growth. The decision to split OpenNMS into Meridian and Horizon paid off in three major Horizon releases. Horizon 17 was an especially large and important release as it brought in the Newts integration. At the moment we are working with it on a customer site using a ScyllaDB cluster capable of supporting 75K inserts per second. The technologies introduced in 2015 will make it in to Meridian 2016, due in the spring, and it should solidify OpenNMS as a platform that can really scale.

In 2015 we also received orders from two of the Fortune 5 companies. I’ll leave it as an exercise to the reader to guess which two and you have a 1 in 16 shot at getting it right (grin). The fact that companies that can choose, literally, any technology they want still choose OpenNMS speaks volumes.

One of these days we’re going to have to figure out a way to talk about our customers by name, since they are all so cool. We are working on it, but it is surprisingly difficult to get permission to publicly post that information. Above all we respect our clients’ privacy.

I have high expectations for 2016 and the power of the Open Source Way. Thanks to everyone who has supported us over the last decade and more, and we just hope you find our efforts provide some value.

Happy New Year.

by Tarus at January 05, 2016 05:41 PM

December 24, 2015

Adventures in Open Source

The Inverter: Episode 56 – Moon Pigeons

A bit more navel-gazing than normal, the latest Bad Voltage clocks in at nearly 90 minutes. Whew.

It was nice that Jeremy was back, and I found it hilarious that in the past two weeks he hadn’t bothered to listen to the show he missed. Considering the fact that that show was one of the shortest of the year, I guess we know who is doing all the talking. Or, as Jono points out, Jeremy is the one who clutters up everything with facts. I thought Aq’s audio was a bit off at the beginning (it sounded like he was down a well) but it seemed to get better as the show progressed.

The first segment concerned the failure of open source mobile projects like Firefox OS and Jolla. I thought this bit ran long, but there were some gems to be had. Bryan was talking about running Linux on tiny mobile devices for which he was mercilessly teased, but I had to agree with him. While I would never want to be forced to run LibreOffice exclusively on a device the size of my Nexus 6, sometimes it would be nice to be able to do quick edits on the go. I hate using ssh on my handy, but when I need it, I need it.

Jono points out that a lot of people tie their personal identity to their mobile devices. A lot of the way people interact with each other these days is through SMS, Facebook and Instagram, and the constant use of an iPhone or an Android phone can cause people to get very attached to them. Any new challenger to the iOS/Android juggernaut has to not only support those apps, they have to overcome the fact that people (to some degree myself included) have strong ties to their technology choices. Unlike how OpenStack disrupted the nascent cloud market, it seems to be hard for open source projects to do the same in the mobile arena, and I had to laugh when Aq suggested replacing “disrupt” with “f*ck up”.

It was pointed out that if companies like Microsoft who can throw tens of billions of dollars at a market can only garner a little over 2% market share, it is doubtful that a new open source project would have better success.

On a side note, I just spent a few days up on the Eastern Shore of Maryland and the client liked to use Surface Pro tablets. I got to see them in action, and they are pretty amazing – for many they could be a laptop replacement just like the ads suggest. But I doubt that Microsoft is going to dent iPad sales just because of the brand Apple has built. Often it is not the superior technology that wins.

The second segment was a review of a couple of security cameras that Jeremy was trying out: the Arlo by Netgear and the Guardian DCS-2630L by D-link.

I have a couple of cameras at my place, although I don’t have the budget of these guys. Inside buildings I have the D-Link (DCS-5010L) which is a great little camera. It does pan and tilt and works in low light conditions. Since it wouldn’t do well outside, I have the Agasio A602W which is no longer available.

While neither of them is totally wireless (i.e. you have to plug them in), they are both supported via open source tools like Zoneminder, although with the purchase of my Synology box I just use the Surveillance Station app that comes with it. It can continuously record, record only when motion is detected, etc., and you can set how much video to store per camera. I really dislike the thought of video from my house going “to the cloud” so I love the fact that I can control where it goes, and Synology has a mobile app that lets me access the video whenever I want it (plus, my DSL upstream would suck for constantly uploading video). The Arlo does seem to be compatible with the Surveillance Station, so as Jeremy’s pick I might have to try one out.

[UPDATE: WCCFTech is full of crap and the Arlo is not compatible with the Surveillance Station]

One last comment from Aq brings up a coming issue with the Internet of Things. All of these toys should play nice together, but often they don’t. He calls it “IoT lockout” but I like “Internet of Silos” (i.e. Z-Wave vs. ZigBee). I do like how most of these cameras have a web interface where the video stream can be accessed by a URL, which means third party tools can access and integrate with them, but I expect vendors to start locking stuff like that down to force people into their own particular cloud infrastructure.

The third segment concerned the “Luna Ring” – an idea started a few years ago by a Japanese engineering firm to ring the moon with solar panels and beam the energy back to Earth via microwave and lasers. I did laugh out loud at Jono’s comment that the name sounds like a contraceptive device.

Odd names aside, I think this is both a cool idea and one that will never happen. The guys point out some of the obvious flaws, but I can’t help but think of the resistance the world would have to high powered beams of light focused on points on the Earth. Sounds like something a James Bond villain would think up.

I did get embarrassed for my home state when it was brought up that the town of Woodland, NC, recently voted down zoning for a solar farm. The click-bait reason given was that one citizen pointed out that solar farms would “suck up all the energy from the sun”.

(sigh)

The actual story is a little more involved. There are already three solar farms in the area surrounding a local substation, so the town is obviously not anti-solar. Small towns like Woodland are getting hit hard with the decline of manufacturing, so I can see the residents there being frightened and looking for a scapegoat. Still, I had to be embarrassed by some of the comments, and it is obvious our educational system needs some work (but that’s a totally different topic).

One person commented that the solar panels were killing the plants. That reminded me of a project my friend Lyle produced called “solar double-cropping”.

As I write this, it is over 72F (22C) on Christmas Eve, the hottest Christmas Eve on record. Our climate is changing and plants that used to thrive are having issues. The idea of solar double-cropping is to use shade from solar panels to help those plants while generating electricity.

And yes, they came up with it in North Carolina.

The final segment was a “year in review”. The guys lamented the lack of innovation, but there were some good things, too. As a “freetard” (someone who runs open source software almost exclusively) I had to agree with Aq that those of us who feel this way are having to compromise less and less as the open source options get better (although I still have to tease him about the compromise he made for his closed source One Plus X phone).

We saw high definition pictures of Pluto. I’m still amazed that nine years ago, we as a civilization chucked a bunch of metal up into space and it managed to rendezvous with a planetoid without major issues. We lobbed another piece of metal at a comet, and while not as successful it was still quite a feat.

In entertainment, the amazing Mr. Robot television series offered us a portrayal of hacking that wasn’t totally made up.

Speaking of entertainment, the show closed with a reminder that Live Voltage will be happening at next month’s SCaLE conference. If you can, you should go, and they are still accepting ideas for “upSCALE” talks. From their latest e-mail:

UpSCALE Talks: There is still room for an UpSCALE Talk or two – UpSCALE Talks are held in the style of Ignite presentations offered at various O’Reilly-sponsored events where participants are given five minutes with 20 automatically-advancing slides. Those interested in submitting an UpSCALE Talk can submit through the SCALE CFP system – https://www.socallinuxexpo.org/scale/14x/cfp – and mark your talk with the UpSCALE tag.

So that’s it for 2015. I’m off to put on some shorts and sunscreen. ♫ Oh the weather outside is frightful … ♫

by Tarus at December 24, 2015 03:42 PM

December 10, 2015

Adventures in Open Source

Mint 17.3 (Rosa) on the Dell XPS 13 (9343)

I’m a big fan of the Dell XPS 13. It is the first laptop I’ve felt an emotional attachment to since my first Powerbook. The only issue is that I have not been able to run my distro of choice, Linux Mint, due to severe issues with the trackpad.

Mint on XPS

With the release of Mint 17.3 (Rosa) I decided to give it another shot. I burned the image to a USB stick and booted to it, and the trackpad issues were gone.

Yay!
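As an aside, writing the image to the stick is just the usual dd routine on Linux. This is only a sketch – the ISO name and device node below are placeholders for whatever you actually downloaded and plugged in, so double-check the device before running it:

sudo dd if=linuxmint-17.3-cinnamon-64bit.iso of=/dev/sdX bs=4M
sync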

So I went ahead and installed Mint. I did have to use a wired network connection since the Broadcom drivers don’t seem to work during the install (there is probably a way around that), but once installed they were easy to enable.

One thing I liked about Mint when I had installed it previously was that it recognized the HiDPI screen of the XPS right away. Even though the “What’s New” page says that HiDPI detection has been improved in 17.3, I found that it had regressed and I needed to squint to get the O/S installed. Once I did, however, I was able to go to Settings -> General and switch to HiDPI mode and everything was fine.

Mint HiDPI Setting

Now, the XPS hardware is so new that it really requires a 4.2 kernel. I decided to install it. No biggie, since I had to do it with Ubuntu 15.04, but I’ll be happy when Mint 18 comes out and it is supported natively (you have to do some apt magic to ignore kernel updates). Once installed, my wireless connection failed to work, and that’s where the fun began.
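One flavor of that “apt magic” – and this is just a sketch of one way to do it, with meta-package names that depend on what your install actually uses – is to put a hold on the stock kernel meta-packages so a routine upgrade doesn’t drag the distro kernel back in over the manually installed one:

sudo apt-mark hold linux-generic linux-image-generic linux-headers-generic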

Usually, all I had to do was reinstall the bcmwl-kernel-source package, but this kept failing with an error. I even built the package from source but while it built just fine, DKMS would fail when installing it, complaining about “-fstack-protector-strong”. Turns out this was added in gcc 4.9 and Mint 17.3 ships with gcc 4.8.

(sigh)

Anyway, not hard to fix. I ran the following commands:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get dist-upgrade
sudo apt-get install gcc-4.9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 70

and now gcc 4.9 was my default compiler. I then rebuilt and installed the bcmwl-kernel-source package and things were golden.
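To double-check which compiler ended up as the default, or to flip back to 4.8 later, the standard alternatives commands handle both:

gcc --version
sudo update-alternatives --config gcc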

$ modinfo wl
filename:       /lib/modules/4.2.6-040206-generic/updates/wl.ko
license:        MIXED/Proprietary
srcversion:     D46E6565F844EFBD46CE0FC
alias:          pci:v*d*sv*sd*bc02sc80i*
depends:        cfg80211
vermagic:       4.2.6-040206-generic SMP mod_unload modversions 
parm:           passivemode:int
parm:           wl_txq_thresh:int
parm:           oneonly:int
parm:           piomode:int
parm:           instance_base:int
parm:           nompc:int
parm:           intf_name:string

Just like with Ubuntu Gnome, I did have to manually install the bluetooth driver, but at the moment everything seems to work: wireless, bluetooth, the touchscreen, the clickpad, sleep, backlit keyboard, etc.

Now I use a desktop as my primary machine, so I haven’t really taken the XPS through its paces, but I’m scheduled to travel soon and I’ll be sure to post if I have any issues. I did enable the screensaver and once when I came back to the machine my mouse pointer was gone (the mouse still worked, you just couldn’t see the pointer) and I was unable to fix it without a restart (I tried the suggestions in Google but it didn’t work). For now I’ve just disabled the screensaver.

All in all, great work from the Mint team, and while I actually enjoyed my time with Ubuntu Gnome I’m happy to be back. Looking forward to Mint 18 in the Spring, which should require less effort to run on the XPS with built-in support for the 4.x kernel series.

by Tarus at December 10, 2015 02:32 PM

December 08, 2015

Adventures in Open Source

OpenNMS Horizon 17 Released

I am extremely happy to announce the availability of OpenNMS Horizon 17. This marks the fourth major release of OpenNMS in a little over a year, and I’m extremely proud of the team for moving the project so far forward so quickly.

There is a lot in this release. One of the major things is support for a new storage backend based on the Newts project. This will enable OpenNMS to store basically unlimited amounts of time-series data. The only thing missing, which should be completed soon, is a way to convert all of your old RRD-based data to Newts. Since it will take people a while to get a Newts/Cassandra instance set up, we didn’t want to hold the rest of the release until this was done. If you are installing OpenNMS from scratch and don’t have any legacy data, the Newts integration is ready to go now.
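For those starting from scratch, the switch lives in opennms.properties. Here is a minimal sketch, assuming a single Cassandra (or ScyllaDB) node on localhost and the default keyspace name – see the admin guide for the full set of properties and for initializing the keyspace before the first start:

# $OPENNMS_HOME/etc/opennms.properties
org.opennms.timeseries.strategy=newts
org.opennms.newts.config.hostname=localhost
org.opennms.newts.config.port=9042
org.opennms.newts.config.keyspace=newts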

The team is also making great strides in improving the documentation, and you’ll find a better version of the Release Notes there.

Horizon 17 will form the basis for Meridian 2016, which we expect in early spring. The next Horizon release will contain the completed Minion functionality, which adds the ability to distribute OpenNMS so that, along with Newts, OpenNMS will have nearly limitless scalability.

Not bad for a free software product, eh? Remember you can always play with the latest and greatest of any OpenNMS development branch just by installing the desired repository.

Anyway, enjoy, and I’ll be sure to post when the RRD converter is available.

Bug

  • [NMS-5613] – odd index "ifservicves_ipinterfaceid_idx" in database – typo?
  • [NMS-5946] – JMX Config Tool CLI is not packaged correctly
  • [NMS-6012] – Statsd randomly looks for storeByForeignSource rrds
  • [NMS-6478] – 'Overall Service Availability' bad info in case of nodeDown / nodeUp transition
  • [NMS-6493] – Running online report "Response Time Summary for node" produces Unexpected Error
  • [NMS-6555] – Outdated Quartz URL in provisiond-configuration.xml file
  • [NMS-6803] – Not evaluating threshold for data collected by HttpCollector
  • [NMS-6927] – test failure: org.opennms.web.alarm.filter.AlarmRepositoryFilterTest
  • [NMS-6942] – test failure: org.opennms.web.svclayer.DefaultOutageServiceIntegrationTest
  • [NMS-6944] – When building the "Early Morning Report" I get a "null" dataset argument Exception.
  • [NMS-7000] – Early Morning Report will not run correctly without any nodes in OpenNMS
  • [NMS-7001] – Availability by node report needs a "No Data for Report" Section
  • [NMS-7024] – Event Translator cant translate events with update-field data present
  • [NMS-7095] – Topology Map does not show selected focus in IE
  • [NMS-7254] – MigratorTest fails on two of the 3 tests.
  • [NMS-7407] – Inconsistent naming in Admin/System Information
  • [NMS-7411] – Fonts are too small in link detail page
  • [NMS-7417] – Fix header and list layout glitches in the WebUI
  • [NMS-7459] – Dashboard node status shows wrong service count
  • [NMS-7516] – XML Collector is not working as expected for node-level resources
  • [NMS-7600] – build failure in opennms-doc/guide-doc on FreeBSD
  • [NMS-7649] – etc folder still contains references to capsd
  • [NMS-7667] – Vaadin dashboard meaning of yellow in the surveillance view
  • [NMS-7679] – Audiocodes.events.xml overrides RMON.events.xml
  • [NMS-7680] – JMX Configuration Generator admin page fails
  • [NMS-7693] – Example Drools rules imports incorrect classes
  • [NMS-7695] – Logging not initialized but used on Drools Rule files.
  • [NMS-7702] – Problems on graphs for 10 gigabit interface
  • [NMS-7703] – Database Report – Statement correction
  • [NMS-7709] – Building OpenNMS results in a NullPointerException on module "container/features"
  • [NMS-7723] – PSQLException: column "nodeid" does not exist when using manage/unmanage services
  • [NMS-7728] – Add support for jrrd2
  • [NMS-7729] – Log messages for the Correlation Engine appear in manager.log
  • [NMS-7736] – bug in EventBuilder method setParam()
  • [NMS-7739] – Unit tests fail for loading data collection
  • [NMS-7748] – SeleniumMonitor with PhantomJS driver needs gson JAR
  • [NMS-7750] – Cannot edit some Asset Info fields
  • [NMS-7755] – c.m.v.a.ThreadPoolAsynchronousRunner: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@59804d53 — APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
  • [NMS-7762] – noSuchObject duplicates links on topology map
  • [NMS-7764] – Error when you drop sequence vulnnxtid
  • [NMS-7766] – Incorrect unit divisor in LM-SENSORS-MIB graph definitions
  • [NMS-7770] – HttpRemotingContextTest is an integration test and needs to be renamed as such
  • [NMS-7771] – Fix unit tests to run also on non-US locale systems.
  • [NMS-7772] – JMX Configuration Generator (webUI) is not working anymore
  • [NMS-7777] – node detail page failure
  • [NMS-7778] – Measurements ReST API broken in develop (CXF)
  • [NMS-7785] – OSGi-based Web Modules Not Accessible
  • [NMS-7791] – OSGi-based web applications are unaccesible
  • [NMS-7794] – Cannot load events page in 17
  • [NMS-7802] – JSON Serialization Broken in REST API (CXF)
  • [NMS-7814] – Queued RRD updates are no longer promoted when rendering graphs
  • [NMS-7816] – The DataCollectionConfigDao returns all resource types, even if they are not used in any data collection package.
  • [NMS-7818] – Measurements ReST API Fails on strafeping
  • [NMS-7819] – Requesting IPv6 resources on measurements rest endpoint fails
  • [NMS-7822] – Remove Access Point Monitor service from service configuration
  • [NMS-7824] – The reload config for Collectd might throws a ConcurrentModificationException
  • [NMS-7826] – Exception in Vacuumd because of location monitor changes
  • [NMS-7828] – NPE on "manage and unmanage services and interfaces"
  • [NMS-7834] – Smoke tests failing because OSGi features fail to install: "The framework has been shutdown"
  • [NMS-7835] – "No session" error during startup in EnhancedLinkdTopologyProvider
  • [NMS-7836] – KIE API JAR missing from packages
  • [NMS-7839] – Counter variables reported as strings (like Net-SNMP extent) are not stored properly when using RRDtool
  • [NMS-7844] – Some database reports are broken (ResponseTimeSummary, etc.)
  • [NMS-7845] – New Provisioning UI: 401 Error when creating a new requisition
  • [NMS-7847] – Graph results page broken when zooming
  • [NMS-7848] – Parameter descriptions are not shown anymore
  • [NMS-7852] – UnsupportedOperationException when using the JMXSecureCollector
  • [NMS-7855] – distributed details page broken
  • [NMS-7856] – Default log4j2.xml has duplicate syslogd appender, missing statsd entries
  • [NMS-7857] – Cisco Packets In/Out legend label wrong
  • [NMS-7858] – Enlinkd CDP code fails to parse hex-encoded IP address string
  • [NMS-7861] – IpNetToMedia Hibernate exception in enlinkd.log
  • [NMS-7867] – Duplicate Drools engines can be registered during Spring context refresh()
  • [NMS-7870] – PageSequenceMonitor broken in remote poller
  • [NMS-7874] – The remote poller doesn't write to the log file when running in headless mode
  • [NMS-7875] – Distributed response times are broken
  • [NMS-7877] – HttpClient ignores socket timeout
  • [NMS-7884] – RTC Ops Board category links are broken
  • [NMS-7890] – Remedy Integration: the custom code added to the Alarm Detail Page is gone.
  • [NMS-7893] – LazyInitializationException when querying the Measurements API
  • [NMS-7897] – Statsd PDF export gives class not found exception
  • [NMS-7899] – Deadlocks on Demo
  • [NMS-7900] – JMX Configgenerator Web UI throws NPE when navigating to 2nd page.
  • [NMS-7901] – Incorrect Fortinet System Disk Graph Definition
  • [NMS-7902] – Pages that contain many Backshift graphs are slow to render
  • [NMS-7907] – The default location for the JRRD2 JAR in rrd-configuration.properties is wrong.
  • [NMS-7909] – Missing dependency on the rrdtool RPM installed through yum.postgresql.org
  • [NMS-7917] – Alarm detail filters get mixed up on the ops board
  • [NMS-7921] – Startup fails with Syslogd enabled
  • [NMS-7926] – FasterFilesystemForeignSourceRepository is not working as expected
  • [NMS-7930] – Heat map ReST services just produce JSON output
  • [NMS-7935] – ClassNotFoundException JRrd2Exception
  • [NMS-7939] – HeatMap ReST Xml output fails
  • [NMS-7942] – Apache CXF brakes the ReST URLs for nodes and requisitions (because of service-list-path)
  • [NMS-7944] – Jersey 1.14 and 1.5 jars mixed in lib with Jersey 1.19
  • [NMS-7945] – Incorrect attribute types in cassandra21x data collection package
  • [NMS-7948] – Bad substitution in JMS alarm northbounder component-dao wiring
  • [NMS-7959] – Bouncycastle JARs break large-key crypto operations
  • [NMS-7962] – Missing graphs in Vaadian dashboard when storeByFs=true
  • [NMS-7963] – JSoup doesn't properly parse encoded HTML character which confuses the XML Collector
  • [NMS-7964] – MBean attribute names are restricted to a specifix max length
  • [NMS-7968] – Auto-discover is completely broken – Handling newSuspect events throws an exception
  • [NMS-7969] – JMS alarm northbounder always indicates message sent
  • [NMS-7972] – Querying the ReST API for alarms using an invalid alarmId returns HTTP 200
  • [NMS-7974] – The ICMP monitor can fail, even if valid responses are received before the timeout
  • [NMS-7977] – JMX Configuration Generation misbehavior on validation error
  • [NMS-7981] – The ReST API code throws exceptions that turns into HTTP 500 for things that should be HTTP 400 (Bad Request)
  • [NMS-7985] – New servers in install guide
  • [NMS-7997] – Background of notifications bell icon is too dark
  • [NMS-7998] – Provisiond default setting does not allow to delete monitoring entities
  • [NMS-7999] – Upgrade to commons-collections 3.2.2
  • [NMS-8001] – NPE in JMXDetector
  • [NMS-8004] – Iplike could not be installed following install guide

Enhancement

  • [NMS-1488] – Add option to the <service> element in poller-configuration.xml to specify service-specific RRD settings
  • [NMS-1910] – Additional storeByGroup capabilities
  • [NMS-2362] – Infoblox events file
  • [NMS-3479] – Adding SNMP traps for Raytheon NXU-2A
  • [NMS-4008] – Add A10 AX load balancer trap events
  • [NMS-4364] – Interactive JMX data collection configuration UI
  • [NMS-5016] – Add Force10 Event/Traps
  • [NMS-5071] – Event definition for Juniper screening SNMP traps
  • [NMS-5272] – events definiton file for DSVIEW-TRAP-MIB
  • [NMS-5397] – Trap definition files for Evertz Multiframe and Modules
  • [NMS-5398] – Trap and data collection definitions for Ceragon FibeAir 1500
  • [NMS-5791] – New (additional) event file for NetApp filer
  • [NMS-6770] – New Fortinet datacollection / graph definition
  • [NMS-7108] – DefaultResourceDao should use RRD-API to find resources
  • [NMS-7131] – MIB support for Zertico environment sensors
  • [NMS-7191] – Implement "integration with OTRS-3.1+" feature
  • [NMS-7258] – Unit tests should be able to run successfully from the start of a compile.
  • [NMS-7404] – Create a detector for XMP
  • [NMS-7520] – Remove linkd
  • [NMS-7553] – Add Juniper SRX flow performance monitoring and default thresholds
  • [NMS-7614] – Enable real SSO via Kerberos (SPNEGO) and LDAP
  • [NMS-7618] – Create opennms.properties option to make dashboard the landing page
  • [NMS-7689] – Get rid of servicemap and servermap database tables
  • [NMS-7700] – Add support for Javascript-based graphs
  • [NMS-7722] – Dell Equallogic Events
  • [NMS-7768] – Persist the CdpGlobalDeviceIdFormat
  • [NMS-7798] – Add Sonicwall Firewall Events
  • [NMS-7805] – JMS Alarm Northbounder
  • [NMS-7821] – DNS Resolution against non-local resolver
  • [NMS-7868] – Recognize Cisco ASA5580-20 for SNMP data collection
  • [NMS-7949] – Promote Compass app when mobile browser detected
  • [NMS-7986] – Document how to configure RRDtool in OpenNMS

Story

  • [NMS-7711] – nodeSource[] resource ids only work when storeByFs is enabled
  • [NMS-7894] – Flatten and improve web app style
  • [NMS-7929] – Document HeatMap ReST services
  • [NMS-7940] – Cleanup docs modules

by Tarus at December 08, 2015 07:53 PM

December 07, 2015

OpenNMS.org Blog

Ubuntu Vagrant Box Update

We have updated our Vagrant box hosted on the Atlas platform with latest OpenNMS Horizon 17 pre-configured with RRDtool. This is also the first VirtualBox image which comes with a pre-installed Grafana 2.5 and has the Grafana OpenNMS Plugin as data source installed and is ready to be used. All you h…

December 07, 2015 10:34 PM

Adventures in Open Source

The Inverter: Episode 55 – Faster than Lightning

I started writing these “inverter” posts because many Bad Voltage episodes would raise topics that I felt deserved commentary. By the middle segment in this episode I was screaming at the computer.

So, good show.

First, whoever decided on the cover art gets some points. It references a groaner of a pun Jono makes that gets dropped in the Intro.

Second, also in the intro, we learn that Jeremy Garcia will not be on the show due to jury duty of all things. While I’ve always considered Jeremy one of the calmer and more reasoned members of the team, since this show clocks in at a scant 52 minutes maybe he’s the one who drags things out. They did stumble a bit on the whole “… and now, Bad Voltage” line so I do look forward to Jeremy’s return.

Okay, the first segment concerns the “new” economy of begging. It kind of focuses on what we would call “crowdfunding”, but as Stuart points out, crowdfunding usually means that you get something in return. However, with sites like “GoFundMe” the term has been expanded to include outright begging, as in “Dear Internet, help, can you spare a dollar for a sandwich”. A quick perusal of the site with a search in my local area brings up a number of campaigns ranging from a person who was defrauded by a builder, to two women who want to go to the ACC tournament, to another woman who needs help finishing her Ph.D.

I’m not saying this is a bad thing, as the sucker/minute ratio remains high, but it is a bit different from crowdfunding sites like Indiegogo and Kickstarter where the donors have a non-zero expectation of actually getting something. That is more along the lines of “new economy” than asking strangers to pay for your vacation.

So, let’s talk about those programs. I have to admit I don’t participate in them. Before you go and call me a cheapskate and a leech, I do donate a lot of money to local and free software causes, but I just don’t do it via these programs. I’ve participated in exactly two Indiegogo campaigns and one Kickstarter campaign. Let’s see how they went.

The first time was the Indiegogo campaign for the Ubuntu phone. While I am perfectly happy with my Android phone (more on that later) I support open source efforts and this seemed like a good thing. They were organized and they had realistic expectations for what it would cost. The campaign fell well short of their goal and my money was returned. All in all, I’m okay with that.

The next time was also on Indiegogo. It was for the Angel Sensor wearable health device. I have a keen interest in how my body is behaving as metrics are the key to making successful improvements. The problem is I don’t want to be sending my activity and sleep pattern information to some third party like Fitbit or Jawbone. I was very eager for an open source solution.

I’m still waiting.

Plagued by production problems and lack of communication, I have no idea if I’ll ever see the device on which I spent US$178. The one person I knew there is on “a well deserved leave”. Furthermore, I’m not sure if they are releasing the server and client code as open source, which is what I was led to believe was the plan. Finally, the first app they wrote for it is for the iPhone of all things, which makes me think that their dedication to open source is a bit lacking. At this point in time I’ve written the whole thing off.

When the Mycroft project did the crowdfunding thing, I was sorely tempted to buy in, but my experience with Angel has made me cautious. I think a lot of technology-based projects severely underestimate what is needed to be successful. They aim low and then trumpet when their stretch goals are met, only to wake up later to the fact that it is going to be a lot harder to deliver than they thought, like the hangover after a big bender.

Please note that I’m not saying this will happen with Mycroft, I wish them all the luck in the world, it’s just that I’ll shell out a few extra ducats for the finished thing when it arrives rather than gamble.

Does anyone remember Diaspora? It was the open source, distributed Facebook. I thought the project was dead, but it is apparently still around, although the pressure of delivering on it is blamed for the suicide of one of the co-founders. Diaspora was one of the most successful Kickstarter projects at the time.

This isn’t to say that these things always fail. The “Exploding Kittens” project was phenomenal and while I haven’t played it I’ve given it as a gift and people say it is a lot of fun. This is where I think crowdfunding can shine – in creative projects where the sponsors have a huge amount of control over the product. I’ve heard of a number of successful movie, music and video projects that were crowdfunded without problems.

Which brings me to my one foray into Kickstarter. I’m a huge fan of the band De La Soul. To me they were the first nerdcore hip-hop group. When hip-hop seemed solely focused on “bitches ‘n hos,” De La Soul was delivering thoughtful, fun and energetic music. When they announced their Kickstarter for a new album, I signed up and ordered the album to be digitally delivered on a 1GB Posdnuos USB drive set for September delivery.

Well, it ain’t here. (grin)

I really don’t mind – I’d rather the album ship when it is ready (probably next Spring) than for them to release crap on time but I’m basically 0-3 on the whole crowdfunding thing.

I was thinking about this when the second segment started with Aq reviewing his new One Plus X (OPX) phone, giving it a 9 out of 10.

This is when I started yelling.

See, while I have zero experience with the OPX I bought a One Plus One (OPO) and I found One Plus to be one of the most horrible companies on the planet.

I was first introduced to the OPO by some friends in Germany. Here was a powerful phone in an attractive package at a reasonable price. It also ran open source software in the form of a version of Cyanogenmod, a packaged instance of the Android Open Source Project (AOSP). Finally, it was relatively inexpensive. Too good to be true?

It was.

They have an “invite” system in order to even buy the phone, but I managed to wrangle one. While I thought the phone was too big initially, I got used to it and soon I was telling everyone how great it was, just like Stuart does in his review.

But then things started to go sour. The upper half of the digitizer started acting up and so I opened a ticket with support. This is when One Plus started to lie and cheat, trying to wrangle out of the fact that they had a hardware problem. The problem has one topic on their forums that had 125 pages of posts before they closed it, and another that is at 305 pages as I write this. That’s 305 pages of pure horror stories.

So when I say lie, we all know that One Plus is a tiny Chinese firm, yet all of my support replies came from “different” people with traditionally English female names, like Kathy, Leah and Jessa. I think this was a tactic to make us more sympathetic to them since they knew they were going to provide crappy support.

When I say cheat, they refused to honor warranty support and kept asking me to perform a number of increasingly complex tasks culminating in disassembling my phone. When I refused, fearing I would damage it, they refused service, even when I offered to send it to them at my expense.

In my mind, One Plus is pure scum and no one should buy their products. I came extremely close to launching a class action lawsuit against them before I decided I had better things to do than to sue a company that won’t be around in five years.

Seriously, if I had to choose solely between an iPhone and a One Plus phone I’d grab the iPhone so fast I’d break my fingers. Finally, their new OxygenOS is closed source so you are up the same creek as if you had bought a Samsung or other closed Android phone.

So I’m screaming at the computer because I know Aq’s “9 out of 10” review will move people to consider buying one. Don’t! Aq has hooked up with the same skank that did me wrong, and while part of me wishes them well, I know it will end in misery.

But what are the options, you might ask. Samsung is expensive and closed, Google is getting more and more closed, and so perhaps One Plus is the least of the evils.

There are options, but Stuart’s will be pretty limited since he seems to have two huge prejudices. First, he expresses disdain for people who root their phones. This is odd, since I don’t think he’d have any issue with buying a laptop that shipped with Windows and putting Linux on it, and this is, after all, a podcast about things hackable. Second, he seems to dislike anyone with a “big” phone.

I love the alternative ROM crowd. These are the true AOSP disciples, and my favorite ROM is OmniROM. I love OmniROM so much that when I need a new phone I work backwards. I start with the list of officially supported OmniROM devices and make my choice from there. While I closely identify with the philosophy behind OmniROM (it was started as a fork from Cyanogenmod when they got tons of VC money and went evil), what I love are the options. You can choose just how many or how few applications you want from the Google ecosystem, which allows you to easily limit what you want to share (note that this is available with almost any alternative ROM), and they turn on a lot of things Google doesn’t, such as “shake to dismiss” in the alarm.

As for size, when I unpacked my OPO I thought the thing was huge. I was using an HTC One and it seemed tiny in comparison. It took me about two days to get used to it. When I replaced the OPO because they are huge douches (or whatever is Chinese for douche) I went with the Nexus 6. Now that is a huge phone, and I’m sure Aq will belittle it.

Know what? After about two days of using it, it felt normal. I love my Nexus 6 running OmniROM. The large screen allowed me to retire my Nexus 7 since I can comfortably watch videos on it when traveling. It has an amazing camera, is extremely fast and gets all the latest Android shiny. In fact, I was amazed that when the new Nexus phones came out I found myself asking myself why in the world would I switch? Plus the Nexus 6 still has wireless charging, which I’ve become used to.

I think Aq’s size issues stem from the fact that everyone thinks that if someone is using a phone bigger than the one they use, those people are crazy. If he spent a week with a Nexus 6, I’m sure his mind would change. Now, he’s given up freedom for a pretty face with a cheap price tag.

Now it seems like I’m picking on Stuart a lot, but I don’t mean to be mean. I love the guy and I want him to be happy, but that little tramp will only bring misery. Mark my words.

If One Plus did you wrong, let him know, but I think it is too late. As with every doomed relationship, when you are in it you can’t see it coming.

Whew.

After the first two, the last segment was pretty conflict free. It concerns the US Department of Justice wanting to force Apple to unlock a phone. I thought this case raised a couple of interesting points.

First the reason they want to force Apple to do it instead of the owner is to avoid the issues of self-incrimination. I never really thought about that before, but it is good to know.

Second, the DoJ is using the logic that since Apple still owns the software on the phone, they should be able to unlock it. Most people (well, non-software people) don’t know or realize that they don’t own most of the software they use. They have just been granted a right to use it. Now Apple (and Google) are taking steps to encrypt phones so even they can’t unlock them. This case involved an older iPhone, but it does make the case for using free software and kudos to Apple for fighting the order.

While there may be a fine line as far as “ownership” is concerned, free and open source software is much more in the hands of the user (you don’t pay for it) so you may have additional protections against self-incrimination when you use it. I am not a lawyer, but it is fun to think about.

The show ended with a reminder that the next Live Voltage show will be at SCaLE in January. I also learned why Bryan missed our little post-show gathering last year – he went to bed.

And here I thought he hated me.

by Tarus at December 07, 2015 04:41 PM

OpenNMS Foundation Europe

[Release] – Ubuntu Vagrant Box Update

We have updated our Vagrant box hosted on the Atlas platform with the latest OpenNMS Horizon 17, pre-configured with RRDtool. This is also the first VirtualBox image that comes with Grafana 2.5 pre-installed and the Grafana OpenNMS Plugin set up as a data source, ready to be used. All you have to do is run

vagrant init opennms/vagrant-opennms-ubuntu-stable
vagrant up

The default Vagrant box uses a NAT interface. To get access to the running application from your host, just add the following lines to your Vagrantfile:

config.vm.network "forwarded_port", guest: 8980, host: 8980
config.vm.network "forwarded_port", guest: 3000, host: 3000

If you want to build the box for a provider other than VirtualBox with Packer, just fork or contribute to the opennms-packer repository.

We have also added a quick install script for Debian and Ubuntu.

gl & hf

by Ronny Trommer at December 07, 2015 07:30 AM

December 02, 2015

OpenNMS Foundation Europe

[Release] – OpenNMS 17

We welcome our new release of OpenNMS Horizon 17 with code name Glen Moray.

Like a good single malt Scotch whiskey, it took some time to get the release out – but we think the wait was worth it. The most obvious change is a more slimmed-down web app layout.

home-modified

We have added a new visual component to show alarms and outages in a heat map which can be used as an additional component on the start page or as a full screen view.

Screen Shot 2015-12-02 at 23.22.48

The JMX data collection configuration tool has been reworked and improved. It allows you to interactively create data collection configurations for your Java applications via JMX.

Screen Shot 2015-12-02 at 23.26.03

The documentation has been improved: we removed unnecessary modules and now focus on the Release Notes, Installation Guide, User Guide, Administration Guide and Developer Guide.
The content on how to develop new documentation has moved from the Documentation Guide into a section of the Developers Guide.

The distributed components built around OpenNMS Minion are introduced in the admin area.

Important to note: Linkd with the SVG map has been removed. Enhanced Linkd with the Topology view is now the default.

On top of that, real SSO via Kerberos (SPNEGO) and LDAP is enabled, and we now integrate with the OTRS-3.1+ ticket system.

We added a JMS Alarm Northbounder to make it easier to integrate OpenNMS in larger management application stacks.

A lot of improvements to the Grafana support were made. We now support Grafana 2.5.0. The OpenNMS Grafana data source allows filtering and trending of performance data, and the Newts integration has been improved.

We have added support for following devices:

  • Added trap support for Infoblox devices
  • Adding SNMP traps for Raytheon NXU-2A
  • Add A10 AX load balancer trap events
  • Add Force10 Traps
  • Event definition for Juniper screening SNMP traps
  • Event definition file for DSVIEW-TRAP-MIB
  • Trap definition files for Evertz Multiframe and Modules
  • Trap and data collection definitions for Ceragon FibeAir 1500
  • New (additional) event file for NetApp filer
  • New Fortinet datacollection / graph definition
  • Event and data collection support for Didactum Sensors
  • Add Juniper SRX flow performance monitoring and default thresholds
  • Dell Equallogic Events
  • Add Sonicwall Firewall Events

For more details you can go to our Release Notes.

Looking forward to upgrade my systems and Happy Upgrading

gl & hf

by Ronny Trommer at December 02, 2015 11:39 PM

OpenNMS Releases

OpenNMS Horizon 17.0.0

We welcome our new release of OpenNMS Horizon 17 with code name Glen Moray.

Like a good single malt Scotch whiskey, it took some time to get the release out – but we think the wait was worth it. The most obvious change is a more slimmed-down web app layout.

We have added a new visual component to…

December 02, 2015 10:34 PM
