Friday, February 21, 2020

Ubuntu: from passionate amateurs to competitive advantage

Note: This story was previously published on LinkedIn Pulse.

Innovation happens when our creativity is challenged by the immutable forces which determine the reality we live in. It is natural for innovation to occur spontaneously, triggered by accident, but we can provoke an increase of innovation through a lively exchange of ideas, collaboration, and sometimes stress in the face of a deadline or other external force.

Frustrated by his inability to use commercial UNIX on his 80386 processor, Linus Torvalds famously wrote, and subsequently released to the world as free software, his Linux kernel. Still, it took almost 15 years before Charles Leadbeater, in his 2005 TED talk, recognized that innovation came from "passionate amateurs" rather than from the isolation of enterprise think tanks or centers of excellence. Even then, open source in the enterprise was an internet phenomenon, something for institutions of higher education perhaps, but certainly not for the Global 500.

Ten years later, in yet another TED talk, Stefan Gross-Selbeck mentions the attractive economics of open source, explaining that "it has never been as easy or as cheap as today to launch a digital product or service. [...] open source software, frameworks like Ruby on Rails, all contribute [to this]." He goes on to conclude that "size does not matter" anymore in terms of the ability to innovate and then disrupt even multi-billion-dollar industries. But as it turned out, open source had even more to offer.

Open-source-driven creativity graduated from an idea about passionate amateurs and users to a force able to democratize the entire landscape of innovation. In 2019, it is not the use of open source, but the speed at which it can be consumed, that drives differentiating features into a software-defined market.

Let's look at the latest example of open-source innovation. Kubernetes was released by Google in 2015 as an open-source project in the face of stiff competition. Mesosphere and Docker both represented the state of the art in container orchestration as far as the market was concerned, and it took until the end of 2017 before users finally concluded that Kubernetes was going to be "it". Since then, the popularity of the container orchestration framework has soared, with KubeCon Seattle 2018 topping out at 8,000 attendees and a further 2,000 on the waiting list.

Ubuntu was founded in 2004 on the idea that the collaborative spirit of the open-source world could be transformed and delivered to users, with the goal of incentivizing them to contribute to, and participate in, the "community of passion" around open source. As we at Canonical celebrate 15 years of delivering open source to our users and customers, we recognize daily the empowerment open source brings by distilling upstream innovation and creativity and enabling its consumption for everyone.

In many ways, Ubuntu's story developed in parallel to that of open source in general. Perhaps because Ubuntu has remained true to providing unmitigated access to the "latest and greatest", it has remained extremely popular with developers and innovators over the years. Today, Ubuntu leads the field as cloud guest, container host, and base-image OS across all major public clouds. Ubuntu runs in the majority of on-premise cloud infrastructure, with over 55% of OpenStack clouds running production workloads built on Ubuntu. More than two million Ubuntu instances are launched every single day on the public cloud alone.

As more Global 500 companies realize that unstifled innovation through the open-source accelerator yields the greatest competitive advantage, their natural tendency to use and leverage Ubuntu increases accordingly. Building on the premise of providing comprehensive, secure, and timely access to open source across both infrastructure and application layers remains Canonical's top mission in 2019 and beyond.

Leveraging our Ubuntu Advantage program, thousands of customers have already realized the potential that unrestricted access to secure and supported open source brings. With the rapid growth of the open-source ecosystem, Canonical is committed to being the most trusted, comprehensive, and economical partner driving open-source innovation forward for its customers. The incredible response from users, the community, and, beyond that, the market has been nothing short of humbling and motivating for us. With leading companies and research institutions around the world leveraging Ubuntu for use cases such as autonomous cars, AI/ML, robots, space exploration, 5G network transformation, and GPU acceleration, Ubuntu use and interest in Canonical services are at an all-time high and continue to grow at an unprecedented rate.

As we near our anniversary of 15 years of Ubuntu releases, that is not a bad position to be in. I for one hope you enjoy the upcoming 19.10 release, which carries special significance for long-time users and Linux enthusiasts like me, and that you join me in reflecting on just how fundamentally free and open-source software has changed the technology landscape.

Friday, February 14, 2020

For this year's Valentine's, a declaration of love to free software and open source

When I was starting out as a hobbyist Linux user, I was constantly trying out new or different desktop environments, distributions, and applications. I held a special fascination for old software, and by that I mean software that has a history associated with it. This wasn't just another email server. This was Postfix, Wietse's email server. He wrote it while he was at IBM Research (why? we don't know, but we're glad he did!), and he supports it. These were the times when man pages, ordered in mysterious chapters that you infixed as a parameter, ruled the day. If there was any doubt that reading man pages was serious business, a well-rounded RTFM was earned in newsgroups. File extensions were for losers. "FVWM is an extremely powerful ICCCM-compliant multiple virtual desktop window manager for the X Window System" - I had no idea what that meant, but by God, I was going to install this thing!

In any case, this was shortly after books had stopped capitalizing commands (in other words, it was OK to $ cp src dst; it did not have to be $ CP SRC DST). Within the depths of this, to me, prehistoric cave of nerd-candy land, I found Emacs. To be precise, I found GNU Emacs and XEmacs. And again I was drawn into the history of why both exist and what happened, and, fascinated as I was by all this, I tried both. I moved back and forth for what must have been the 100th time before finally -thankfully!- finding a short article online stating that XEmacs was no longer recommended; GNU Emacs was where development was most active.

Most of the software I used back then, I have moved on from. I am not using FVWM2 anymore, even though it had been my default ever since 1996. Long before design and looks seemed to become more important than functionality and utility, there was a class of software that just worked. It didn't look great, was sometimes a little slow, or required you to become an expert in its domain-specific language (I encourage you to write a .fvwm2rc one of these days to understand what I mean), but it worked - well. So well, in fact, that even though every once in a while I found myself flirting with the new darling, or with the more convenient "DE"s (desktop environments, meant to indicate a richer graphical experience than a pure "window manager"), eventually I gave up all those good looks and went back to what worked. Thankfully, I was on Linux. This meant that I would usually re-install my computer once per week anyway. Configuration files, those hard-earned settings of mine, were always ready on a special USB stick that looked empty (they all started with a '.', so by default they were hidden). It was my secret cache. Here I was, able to turn any -any- Linux I would ever be confronted with into my own workstation. RCS allowed me to version-control the configuration files, once I understood what that meant.
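To give a flavor of that domain-specific language, here is a minimal, illustrative .fvwm2rc sketch; the menu name and bindings are invented for this example, and the real vocabulary lives in the fvwm2 man page:

```
# Illustrative .fvwm2rc sketch - names like "MyMenu" are made up.
DeskTopSize 3x3             # a 3x3 grid of virtual desktops
Style "*" SloppyFocus       # focus loosely follows the mouse
Style "xterm" NoTitle       # per-application window decorations

# Menus and key bindings are declared in the same language:
AddToMenu MyMenu "Programs" Title
+ "XTerm"  Exec exec xterm
+ "Emacs"  Exec exec emacs
Key F1 R A Menu MyMenu      # F1 on the root window opens the menu
```

Everything, from virtual desktops to window decorations to key bindings, is configured through this one little language, which is exactly why a working .fvwm2rc felt so hard-earned.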

Emacs was one of these tools. I carried around my .emacs file and turned any Emacs I could find into my Emacs. Of course, it absolutely helps that Emacs has gotten better and better over the 15 or so years that I've been using it. Org-mode is fantastic, and it is so far ahead of any other note-taking/task-management/planner tool that it is simply laughable to think of using anything else for this.
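For those who have never seen it, an Org file is just structured plain text; the entries below are invented for illustration, but the markup (headlines, TODO keywords, timestamps, tags) is the real thing:

```
* Projects
** TODO Write blog post about Emacs                    :writing:
   SCHEDULED: <2020-02-14 Fri>
** DONE Migrate dotfiles to RCS
   CLOSED: [2020-02-10 Mon]
* Notes
  Plain text underneath a headline is free-form prose.
```

Because it is plain text, it travels on that same USB stick, diffs cleanly under version control, and works in any Emacs you sit down at.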

Only once did I have another experience like this. As I went through university, I ended up using Ubuntu quite a lot. But I hated GNOME. Remember, this was a desktop environment. I carried my .fvwm2rc around. But... gosh darn it - I had to admit: my printer configuration was a pain, sometimes I just wanted to insert a CD and have it play, and switching between networks with my laptop was a non-trivial matter (I used to purchase only laptops with an RJ-45 connector because I was too lazy to reconfigure my wireless networking). Enter Xfce. I originally started to use it because it -somewhat self-deprecatingly- saw itself as the GNOME environment for those with slow computers. It still used the same GTK widgets and libraries as GNOME, and I really liked what Ubuntu was doing with it, so this was my way of riding that wave of contribution while maintaining my self-respect as a geek. Xfce got out of my way. It simply worked, did what I wanted, and never tried to impose itself on me.

Ironically, I loved programming in Qt.

These days, I am using Ubuntu as my main OS, stock. If I ever had any engineering to do, I would immediately switch to Xfce4, use my GNU Emacs, document everything in Org-mode, and have a gazillion terminals open, plus a Firefox. I look fondly at newcomers to this market who reinvent some of these things and make them "webby", with a nice app to go along. On the road, I dictate practically everything into my phone's assistant. It might understand what I'm saying, but in no way were tasks or notes ever meant to be captured on a phone - it's just not for me.

No one in my family understands this, so my last resort is to commit this to this blog. If you had a similar experience and would like to share, please leave a comment!

Sunday, October 20, 2019


I am usually very open to new technology, even at the cost of some privacy. For example, my home is as smart as I can afford; and if Alexa listens in on conversations, I am generally not too bothered by it, especially since I sniff the traffic between my Echo and my router and can, on the surface of it, generally confirm what Amazon claims with regard to when data is transmitted to the cloud.

But more recently I have taken another look at some of the free services I am using, Gmail especially, and I am not very fond of what I discovered. It turns out that if you don't have to pay for it, you are the product, and while that has been subconsciously obvious to me for a while, I refrained from doing anything about it. Until I discovered ProtonMail and ProtonVPN.

Some time ago, I switched from WhatsApp to Threema, a secure alternative (although I have to say, I'd rather see it fully released as open source). I liked the story of how this Swiss company, backed by correspondingly strict privacy laws and relying on end-to-end encryption, personal verification of contact identities, and so forth, is able to provide not only better security but, first and foremost, privacy. Similarly, ProtonMail offers complete privacy to its users, and while there is a free tier, most of the functionality is restricted to their "Plus" offering. If you get it, you also receive a discount on their VPN connectivity. What I like about them is their focus on privacy without compromising on functionality. I use a PGP key for work, and I was able to integrate it with ProtonMail without a hitch. I love the fact that my emails are stored encrypted and that not even ProtonMail employees can access my data.

Wednesday, October 9, 2019

Kubernetes is a feature, not a product: SuSE drops out of OpenStack

SuSE discontinues OpenStack

It's interesting that SuSE decided to get out of the OpenStack business. However, I disagree that they are doing so because Kubernetes is eating its lunch. The notion that Kubernetes would somehow replace OpenStack amounts to comparing apples to oranges, or in this case, comparing a feature with a product.

The ability to manage workloads is a completely different use case from the ability to manage infrastructure. It is no wonder Kubernetes has attracted the larger number of supporters, simply because there are more developers than SREs.

Containers, much like VM images, application packages, or source-code repositories, are artifacts characterized by their ephemeral nature and their delivery via an active publishing pipeline. They are inherently different from the host operating system, hypervisor management, network configuration, or server security. Kubernetes, in contrast, helps schedule and maintain a controlled set of containers across a compute cluster, and intersects with infrastructure only inasmuch as it connects to software-defined (or host-provided) storage.
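That narrow intersection is easy to see in practice: the place where Kubernetes touches the underlying infrastructure is typically a storage class. A sketch, assuming the OpenStack Cinder CSI driver is deployed (the class name and volume type here are illustrative):

```
# Hypothetical StorageClass that hands volume provisioning off to
# OpenStack Cinder via its CSI driver. Kubernetes consumes the
# infrastructure as a client; it does not manage it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard            # illustrative name
provisioner: cinder.csi.openstack.org
parameters:
  type: standard                   # maps to a Cinder volume type
reclaimPolicy: Delete
```

Everything below that line, such as how Cinder carves the volume out of the backing storage, remains the infrastructure layer's business.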

I should probably point out that I do not consider this in any way shape or form a flaw of Kubernetes; in fact, I would argue that Kubernetes should stay away from infrastructure as much as possible, in order to focus on what it does best: coordinate containers, and interact with infrastructure as a service as a client, not a provider.

It is also true that almost anything can be done if one sets one's mind to it, and if you so desire, you can make your Kubernetes look very much like an infrastructure management tool. But the argument to "design a tool to do one thing well" absolutely does not lead one to the natural conclusion that Kubernetes cluster management should be done by adding that feature to Kubernetes itself. It might be a great idea, and hey - it might even work, but to me, that argument is inherently flawed.

This is also not an apologist post for OpenStack. OpenStack is flawed, severely so, by its complexity, unnecessary performance issues, and a multi-tenancy model that ignores years of industry learnings around domains, RBAC, and directory management such as LDAP or AD. But, restricted to the core competency of what OpenStack is supposed to deliver - compute, network, and storage, with a little load balancing and DNS sprinkled in - one can't help but note that it does this remarkably well. The stable subset delivering this core functionality can be run, managed, and upgraded with relative ease, won't interrupt network connectivity, and represents an excellent alternative to virtual-machine management tools such as VMware.

In fact, when one considers that today's workloads should be cloud-native ... wait a second, isn't that what Kubernetes is supposed to deliver? Cloud native applications running containerized on an IaaS?

To me, running Kubernetes on OpenStack is a perfect use case, and a great alternative to proprietary systems. Why throw out the baby with the bathwater?
