There’s this craving out there in the industry. Imagine walking around with a super-powerful device in your pocket. You can do all sorts of cool things on it: browse the Internet, take amazing pictures, listen to music, download apps, documents and what not. That’s not hard to imagine these days; most smartphones out there can do all of the above and then some. Most of them can and do replace multiple devices we needed in the not-that-distant past (walkmans 1, iPods, calculators, cameras etc.). But the craving is still there. The common understanding is that these devices are so powerful nowadays that they could take on even more. Imagine — the last time, I promise! — walking around with a super-powerful device in your pocket. You get back home or arrive at the office, you take the device out of your pocket and connect it to a big screen, a pointing device and a keyboard. All of a sudden, your pocket device becomes your desktop device. Bam! 🤯
I was working for AT&T when I first learned about SmartOS. The reason was simple: I was on the UNIX/Solaris team, so I was more immersed in “this world” back then. I found the concept of this new OS fascinating, and it was additionally sprinkled with Bryan Cantrill’s amazing lightning talk. It’s one of those moments when I felt I was interested in the right things at the right time. To me SmartOS in many ways felt revolutionary, and I still think that some of its concepts1 are ahead of the industry.
My first encounter with ZFS happened on Solaris 10 running on some SPARC box. It felt very refreshing after SVM or, goodness me, VxVM. I became a fan instantly, as its overall simplicity of administration and promise of reliability were nowhere else to be found.1 Throughout the years I’ve been playing with it mainly on Illumos distros (OmniOS and SmartOS) and FreeBSD, but never got myself to fully entrust it on Linux.
While ZFS still appears to be something of a persona non grata at least in the Linux kernel, Canonical shipping it by default with Ubuntu helps a lot. Quite recently it also became apparent that the community behind ZFS on Linux is the largest and most active one. It also seems that both Illumos and FreeBSD (among others) are going to be syncing with it.
This release brings an impressive set of new features. I for one am most excited about native encryption and the ability to send raw encrypted snapshots.
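As a minimal sketch of those two features together (assuming ZFS 0.8+; the pool and dataset names below are made up for illustration):

```shell
# Create an encrypted dataset, prompting for a passphrase;
# "tank/secrets" is a hypothetical pool/dataset name.
zfs create -o encryption=on -o keyformat=passphrase tank/secrets

# Snapshot it and send it raw (-w) to another pool.
# With -w the blocks go over the wire still encrypted, so the
# receiving side never needs the key to store the backup.
zfs snapshot tank/secrets@backup
zfs send -w tank/secrets@backup | zfs receive backup/secrets
```

The raw send is the part I find most compelling: the backup target can hold your data without ever being able to read it.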
On the 16th of May a new major release of Ansible landed. For a very long time I was a proponent and happy user of SaltStack. I still have a soft spot for it and some formulas lying here and there. At some point, however, I gave Ansible a chance and, while the transition was not exactly trouble-free (I had quite a few habits carried over from Salt), once it clicked, it stayed, and it is my number one automation tool, period.
It’s a huge release; so many things are mentioned in the release notes that it wouldn’t make sense to go through all of them here and now. That said, there’s one thing I was really looking forward to: Python interpreter discovery. It’s surprising how annoying this can be in a mixed distro/version environment. Finally no need for hacky solutions! 🎉
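A quick sketch of how this looks in practice, assuming Ansible 2.8+ (discovery is on by default; this just pins the mode):

```ini
# ansible.cfg
[defaults]
# "auto" picks the proper interpreter per platform at runtime;
# "auto_silent" does the same without printing discovery warnings.
interpreter_python = auto_silent
```

For the odd host that still needs a fixed path, the classic per-host variable `ansible_python_interpreter` keeps working and overrides discovery.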
UPDATE (26/04/2019): this post has been updated to include the latest changes made to the project. You can jump directly to it here →
I was lucky enough that relatively early in my career I bet on NGINX as my default HTTP server and essentially never looked back. Sure enough, I started by using it as a reverse proxy in front of Apache, but once it matured and I felt confident it could be trusted with essentially any HTTP-related task, I switched entirely. That was a long time ago, and NGINX has made tremendous progress since then. While its adoption hasn’t surpassed Apache’s so far, it has held second place for quite some time now and keeps growing in numbers each month. I was always fond of how lightweight it is, and I preferred using the FastCGI protocol instead of a native/built-in one, as was the case with Apache at that time.
The caveat is that, while it is an Open Source application, some functionality is available only to paying customers (NGINX Plus). I don’t mind this kind of business model. After all, this is a great application, and I hope it will stay around for years to come; the only way to achieve that goal is to keep it financially sustainable. On the other hand, I’m not able to afford the NGINX Plus subscription (especially for a private use case like mine). Fortunately enough, there are NGINX enthusiasts out there creating 3rd party modules for their favourite HTTP server. Quite a few of them.
There are so many ways these days to start a local VM on a Mac that adding yet another one seems insane. And yet, Multipass from Canonical1 appeals to me the most. Especially when it’s just a quick check that I need to make — it’s as simple as two commands and voilà, it’s up and ready to roll.
With the new version I’m really looking forward to the instance starting automatically on the multipass shell command — it’s safe to say that was the only thing I was missing.
Please note that Multipass works solely with Ubuntu instances. It is cross-platform, however, and one can use it on Windows, macOS and Linux.
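The whole quick-check flow, as a sketch (the instance name “qtest” is my own made-up choice; requires Multipass to be installed):

```shell
# Launch a fresh Ubuntu VM; the name is arbitrary.
multipass launch --name qtest

# Drop into a shell inside it — with the new version this
# also starts the instance if it happens to be stopped.
multipass shell qtest

# Clean up once the quick check is done.
multipass delete --purge qtest
```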
Last time I mentioned that I was working on a central syslog. Part of the task was also the ability to easily go through the logs, preferably with some filtering and what not. The ELK stack is usually the first thing mentioned as a potential solution; essentially the goal is to land your logs in Elasticsearch. The problem with both of these solutions is the processing part. With Logstash things can go very wrong very quickly, and there’s only a handful of things other than _grokparsefailure that can seriously put me into rage mode.
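For context, _grokparsefailure is the tag Logstash attaches to an event whenever a grok pattern fails to match the incoming line. A hypothetical minimal filter (the field layout and pattern here are made up for illustration):

```conf
filter {
  grok {
    # If a line doesn't match this pattern, Logstash doesn't fail
    # loudly — it just tags the event with "_grokparsefailure"
    # and moves on, leaving you to find out later.
    match => { "message" => "%{SYSLOGTIMESTAMP:ts} %{HOSTNAME:host} %{GREEDYDATA:msg}" }
  }
}
```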
Grafana is one of my favourite pieces of Open Source software of all time. I’ve been using it for years and am thrilled to see yet another great major release. I’m really looking forward to getting my hands on the whole new workflow called Explore. Currently it integrates with Loki, but support for Elasticsearch is already on the roadmap!
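As a sketch of what an Explore query against Loki looks like (the label names are assumptions based on a typical promtail setup, not anything mandated by Loki):

```conf
# Select a log stream by its labels, then filter the lines —
# grep-like, but over centralized logs.
{job="syslog", host="web01"} |= "error"
```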
Recently I’ve been tasked with creating a central syslog server. These are very useful when one maintains a couple of boxes (or a couple hundred and more), as they provide a single place for checking out what’s up with the machines. Combined properly with metrics, it serves as a super-boosting way of maintaining an overview of the entire infrastructure.
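The client side of such a setup can be sketched in a single rsyslog rule (the server name central.example.com is made up; assumes a stock rsyslog):

```conf
# /etc/rsyslog.d/50-forward.conf
# Forward everything to the central syslog server.
# A single "@" would mean UDP; "@@" selects TCP.
*.* @@central.example.com:514
```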
When it comes to nginx, it defaults to storing log files in plain text. It’s a sane default and I don’t see a good reason to ship it in any other fashion. However, sometimes the needs change. That was the case for me — I’m using rsyslog1 for all of the OS logs, and it felt natural to have nginx invited to join the party. As the rsyslog client is already pushing all of its logs further to the centralized server part, I wanted the nginx logs included in the stream.
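A minimal sketch of what that looks like on the nginx side (syslog targets have been supported since nginx 1.7.1; the facility and tag here are my choices, not defaults):

```nginx
# Send access and error logs to the local rsyslog over UDP;
# rsyslog then forwards them on to the central server.
access_log syslog:server=127.0.0.1,facility=local7,tag=nginx,severity=info;
error_log  syslog:server=127.0.0.1,facility=local7,tag=nginx;
```

Pointing at the local rsyslog rather than straight at the central server means nginx keeps logging even when the network to the central box hiccups.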