Hello 👋 I'm Dan.

I write PHP, Go, and JavaScript. I also care about web performance.

## Getting started with webpack and React, ES6 style

Note: this article originally appeared on the Humaan blog.

We’re working on a little side project here at Humaan, and I thought it would be the perfect opportunity to try out some new-ish frameworks and build tools. For a while now, React and webpack have been all the rage, and I wanted to learn more about them. I’ve previously dabbled in React, but it was a while ago and many things have changed since then.

This article assumes you have some knowledge of JavaScript, the command line, and have Node.js installed on your system. If you have Node.js installed but haven’t used it for a while, I highly recommend updating it to the latest version.

### New beginnings

With ECMAScript 6 (aka ES6 or ECMAScript 2015, hereafter referred to as ES6 in this article) becoming a thing, I thought I’d give Babel a try too. For a few months now I’ve wanted to ditch jQuery from my stack and start writing native JavaScript. The reason for this is that ES6 has a number of new features that make it easier to write vanilla JavaScript without having to include a ~30 KB jQuery file on each site. jQuery had been a necessity for many years, especially for normalising interaction between different browsers, but as most browser vendors are (for the most part) following the proper conventions, there’s less need for jQuery nowadays.

If you’re not familiar with webpack, it’s a build tool similar to Grunt or Gulp, but it’s also a module loader like RequireJS. While you can technically use webpack with Grunt/Gulp/etc, I’ve found I haven’t had any need to. webpack can do everything Grunt or Gulp does, and more! webpack is designed with large sites in mind: the idea is to write your JavaScript as a modularised code base so you can re-use and share code, making it easier to maintain projects over time.

### A brief summary of the stuff we’re going to use

  • webpack: build tool that uses plugins and loaders to manipulate your code
  • React: a powerful JavaScript library for building large applications with data that changes over time
  • ES6: new standard for the JavaScript language (think CSS3 or HTML5)
  • Babel: transpiles modern ES6 JavaScript to older ES5 syntax for older browsers to understand
  • Sass: Amazingly powerful CSS extension language that we’re already using on all our sites

A significant part of the development stack was set up by following a great article by Jonathan Petitcolas, “How-to setup Webpack on an ES6 React Application with SASS?”. There were a few gotchas in the article, and I also wanted to write my React classes in ES6, so while Jonathan’s article was great for getting set up, it left me wanting to find out more about ES6 and React.

In this article we’ll set up webpack to handle our CSS, images, and JavaScript. I’ll touch a little bit on React, and we’ll make a basic React app that’ll say “Hello, <name>!” and will update automatically using the very cool hot module loader. Our React app will also be written in ES6 syntax which will be transpiled to ES5 syntax by Babel.

### Getting the basics set up

To get started, create a new folder on your computer, and create a package.json file then chuck the following into it:
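For reference, here’s roughly what mine looks like. The package names come from the loaders we’ll wire up in the webpack config shortly; the exact versions (this was the webpack 1 / Babel 5 era) and the server commands are assumptions, so adjust to taste:

```
{
  "name": "webpack-react-demo",
  "version": "1.0.0",
  "scripts": {
    "webpack-server": "webpack-dev-server --hot --progress --colors",
    "web-server": "http-server -p 3000",
    "start": "npm run webpack-server & npm run web-server"
  },
  "dependencies": {
    "react": "^0.13.3"
  },
  "devDependencies": {
    "autoprefixer-loader": "^1.2.0",
    "babel-loader": "^5.1.0",
    "css-loader": "^0.14.0",
    "http-server": "^0.8.0",
    "img-loader": "^1.0.0",
    "node-sass": "^3.2.0",
    "react-hot-loader": "^1.2.0",
    "sass-loader": "^1.0.0",
    "source-map-loader": "^0.1.0",
    "style-loader": "^0.12.0",
    "url-loader": "^0.5.5",
    "webpack": "^1.9.0",
    "webpack-dev-server": "^1.9.0"
  }
}
```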

Next, open up your command-line application of choice, and change your present working directory to the folder you created earlier. We’ve created our list of packages required, so now we need to install them by entering the following command: npm install. Give it a few minutes while all the packages are downloaded and installed to your local filesystem. Once that’s done, create an HTML file called index.html and enter the following:
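Mine looks like the following; the only part that matters is the script tag, which points at the bundle webpack will build for us (the host and port match the webpack config we’re about to write):

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>webpack + React demo</title>
</head>
<body>
    <script src="http://localhost:8080/build/bundle.js"></script>
</body>
</html>
```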

As you can see, there’s absolutely nothing to the HTML page, except for the script tag referencing a currently non-existent JavaScript file.

### Would the real webpack config file please stand up

Now, let’s create our webpack configuration file! In the same folder, create a JavaScript file called webpack.config.js and enter the following:
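Here’s the full file. Don’t panic, every piece of it is dissected below; the only details not spelled out later are the `main` entry name and the entry path, so treat those as assumptions:

```
module.exports = {
    entry: {
        main: getEntrySources(['./src/js/entry.js'])
    },
    output: {
        publicPath: 'http://localhost:8080/',
        filename: 'build/bundle.js'
    },
    devtool: 'eval',
    module: {
        preLoaders: [
            {
                test: /\.jsx?$/,
                exclude: /(node_modules|bower_components)/,
                loader: 'source-map'
            }
        ],
        loaders: [
            {
                test: /\.scss$/,
                include: /src/,
                loaders: [
                    'style',
                    'css',
                    'autoprefixer?browsers=last 3 versions',
                    'sass?outputStyle=expanded'
                ]
            },
            {
                test: /\.(jpe?g|png|gif|svg)$/i,
                loaders: [
                    'url?limit=8192',
                    'img'
                ]
            },
            {
                test: /\.jsx?$/,
                exclude: /(node_modules|bower_components)/,
                loaders: [
                    'react-hot',
                    'babel?stage=0'
                ]
            }
        ]
    }
};

function getEntrySources(sources) {
    if (process.env.NODE_ENV !== 'production') {
        sources.push('webpack-dev-server/client?http://localhost:8080');
        sources.push('webpack/hot/only-dev-server');
    }

    return sources;
}
```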

As there’s a heap of stuff going on in this configuration file, let’s go through it line by line.
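First up, the entry option (again, the entry name and path are my assumption):

```
entry: {
    main: getEntrySources(['./src/js/entry.js'])
},
```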


This option tells webpack all the possible entry points into the website, and which files should be loaded for each. If you're going to split your JavaScript files out and only load them on certain pages, this becomes more important; otherwise, loading all your JavaScript as one bigger file will be fine.

```
output: {
    publicPath: 'http://localhost:8080/',
    filename: 'build/bundle.js'
}
```


The output option tells webpack what the compiled JavaScript file should be called (and where it should be saved), and you'll notice it's the same path as I specified in the HTML.  Public path isn't necessary if you're loading the file via the filesystem (a la file://), but if you're serving the site through a web server, chances are this'll be necessary (in the case of this demo, it's necessary).

```devtool: 'eval'```

The devtool option determines how the sourcemap should be generated.  Depending on which tool you use to generate sourcemaps, [this value can be different](http://webpack.github.io/docs/configuration.html#devtool), but for us it's just eval.

Next up is the module option, but as there are two parts to it, preLoaders and loaders, I'll cover them separately.  preLoaders are applied to code before it's transformed, for example, for code hinting tools or sourcemap tools.  loaders are used for modifying the code, and you can even have postLoaders to handle things like running tests over the generated code.  For further information regarding loaders and their order, I recommend checking out the [loader documentation page](http://webpack.github.io/docs/loaders.html).

```
preLoaders: [
    {
        test: /\.jsx?$/,
        exclude: /(node_modules|bower_components)/,
        loader: 'source-map'
    }
],
```


Using the preLoaders option, we can tell webpack which loaders we want applied to specified files.  In our case, I want to generate sourcemaps for the JavaScript files, so we tell webpack that.  The test option determines which file(s) the loader applies to, and can be written a number of different ways - as you can see, I'm using [RegExp](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp) to find all file names ending in .js or .jsx.  I'm also excluding the `node_modules` and `bower_components` folders as they can be full of hundreds if not thousands of matching files, and I don't want to include them unless I manually import/require them.  Finally, I tell webpack which loader I want to use with the matching files, which is the [source-map-loader](https://github.com/webpack/source-map-loader).

Now, on to our loaders:

```
loaders: [
    {
        test: /\.scss$/,
        include: /src/,
        loaders: [
            'style',
            'css',
            'autoprefixer?browsers=last 3 versions',
            'sass?outputStyle=expanded'
        ]
    },
    {
        test: /\.(jpe?g|png|gif|svg)$/i,
        loaders: [
            'url?limit=8192',
            'img'
        ]
    },
    {
        test: /\.jsx?$/,
        exclude: /(node_modules|bower_components)/,
        loaders: [
            'react-hot',
            'babel?stage=0'
        ]
    }
]
```


Our first loader can look confusing at first, but it's pretty simple really: we're looking for all .scss files nested within the folder "src".  Next up, we have an array of loaders that tells webpack which loaders we want to use, the order of those loaders, and the configuration options to pass on to the libraries themselves.  I find that when using a number of loaders, it's cleaner to use the loaders array, rather than a loader string delimited by exclamation marks.

When specifying loaders using an array, the loader order is from the bottom to the top, so the SCSS files will be compiled with [Sass](http://sass-lang.com/), then [Autoprefixer](https://github.com/postcss/autoprefixer) will work its magic, it'll then be saved as a CSS file, and finally injected into the script with a style tag.

Next, we have options for images included in the JavaScript and CSS.  First, we test for image files, then pass them to the image loader which performs optimisations, and then generates URLs for them.  You'll notice the limit query string of 8192.  With the limit option enabled, all files under that limit will be [base64](https://en.wikipedia.org/wiki/Base64) encoded and stored in our JavaScript bundle.  Any file over that size is left as is and gets linked to via a normal URL.  Very clever stuff!

Finally, our last loader is the JavaScript handler which looks for .js or .jsx files in all folders (excluding node_modules and bower_components), then manipulates them with the Babel transpiler and the React hot module loader.

Lastly, at the bottom of the webpack configuration file we have our function that handles environments and adds extra sources depending on which environment we're in.  By default, the environment is 'development'.

```
function getEntrySources(sources) {
    if (process.env.NODE_ENV !== 'production') {
        sources.push('webpack-dev-server/client?http://localhost:8080');
        sources.push('webpack/hot/only-dev-server');
    }

    return sources;
}
```


This function adds the webpack dev server (WDS) client and the hot module replacement client as extra entry points for the bundle: if the environment variable `NODE_ENV` isn’t set to production, these sources are added.

### Writing React, ES6 style!

That’s our webpack configuration sorted! It’s time to actually start writing some React and getting our app to build.

We need to build up our folder structure, so jump back to your terminal prompt and enter the following:

```mkdir -p src/{js,css,img}```

This will give you the structure:

```
src/
    css/
    img/
    js/
```


In src/css, create a file called `master.scss` and enter the following:
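Something like the following will do; the exact values are assumptions, the important bits are a base font size and the repeating JPEG background:

```
html {
    font-size: 16px;
}

body {
    background-image: url('../img/repeater.jpg');
    background-repeat: repeat;
}
```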



Nothing too special here really, just setting a generic font size and a JPEG image to be a repeating background image. The required JPEG is below:

[![repeater](/uploads/2015/repeater.jpg)](/uploads/2015/repeater.jpg)

Now, let’s create the entrypoint JavaScript file for the app!  Go to src/js, create a file called `entry.js`, and enter what you see below:
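A minimal sketch of the entry point (note that `React.render` was the React 0.13-era API):

```
import '../css/master.scss';

import React from 'react';
import HelloBox from './HelloBox';

React.render(<HelloBox />, document.body);
```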



First off, we import the CSS into our entrypoint, then we import the class HelloBox from the HelloBox file, along with React. Finally, we render the page using React, and attach it to the body.

If you tried to compile that file now, it wouldn't work: HelloBox doesn't exist yet. Let’s resolve that now; create a file called `HelloBox.js`:
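Here’s a sketch of what HelloBox can look like as an ES6 class; the name prop is the bit you’ll be changing later to see hot reloading in action:

```
import React from 'react';
import HelloText from './HelloText';

export default class HelloBox extends React.Component {
    render() {
        return <HelloText name="Dan" />;
    }
}
```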



Finally, we'll create _HelloText.js_:
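Again as an ES6 class, a minimal sketch that renders the greeting from its name prop:

```
import React from 'react';

export default class HelloText extends React.Component {
    render() {
        return <h1>Hello, {this.props.name}!</h1>;
    }
}
```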



Awesome! We've written all the code we need to write right now.

### All the pieces come together

Remember those few scripts that were in `package.json`?  They act as aliases so you can run common commands easily, without having to remember the full invocation each time.  I defined two scripts: `webpack-server`, which runs the built-in webpack development server, and `web-server`, which runs a basic [Socket.IO](http://socket.io/) web server (so we don't have to deal with the `file://` shortcomings).

My third script is `start` which is simply a wrapper around the two previous server scripts.  By running `npm start`, it boots up both the webpack-dev-server, and the http-server with the one command. By default, the webpack dev server runs on port 8080, and we tell the HTTP server to run on port 3000. If you open up your browser and load [http://localhost:3000](http://localhost:3000) you’ll see:

Hello, Dan!

Sweet! Open up your DevTools window to see more information about how the hot module replacement (HMR) and webpack dev server (WDS) tools are working. You’ll also notice that the image loader found that the above JPEG is less than 8192 bytes in size, so in-lined the image as a base64 encoded image.

One of my favourite parts of webpack is the ability to pass the `-p` flag via the command line to enable a production-ready build of the output file, which performs optimisations like minification and uglification on your code. In our config we check for a `NODE_ENV` value of production - to do that, simply prefix `NODE_ENV=production` to your command line calls, like so: `NODE_ENV=production webpack -p`. Check out the webpack CLI documentation for more details.

We haven’t yet seen the real power of HMR, so split your screen up so you can see both a code editor, and your browser window (preferably with the console open). Jump into HelloBox.js and change the name from Dan to Joe. Hit save and a moment later the text will change from Hello, Dan! to Hello, Joe! HMR is much more powerful than tools like LiveReload as HMR will reload just the updated component, so if you have forms that have content entered in them, that content won’t be lost (as long as it’s not a child of the updated component).

If you get stuck on one of the code examples, check out the Bitbucket repository below to download the entire example codebase.

https://bitbucket.org/humaanco/demo-webpack-react-app

## Lessons I've learnt from developing a large Laravel project

For those of you who don’t know me, I’m a huge Laravel fan. I recently finished up a large project for a client, and I learnt many important things along the way. Below, you’ll see the features I used and loved in Laravel to make the development cycle much more enjoyable (and painless). By no means should you feel like you have to use them; these are merely tips I’ve found that helped me.

Middleware is awesome - use it. Middleware allowed me to very easily filter out users who had been added to the portal, but had not activated their account. It also gave me the ability to block access to users who once had an account in this portal, but were now “archived” and no longer able to log in and make changes. Middleware is easily applied to (and removed from) routes as you require, so use it when you can!
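For illustration, here’s a rough sketch of such a middleware; the class name and user attributes are hypothetical:

```
<?php namespace App\Http\Middleware;

use Closure;

class EnsureAccountIsActive
{
    public function handle($request, Closure $next)
    {
        $user = $request->user();

        // Hypothetical flags: block archived users, and users who
        // haven't activated their account yet.
        if ($user && ($user->archived || ! $user->activated)) {
            return redirect('login')->withErrors('Your account is not active.');
        }

        return $next($request);
    }
}
```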

Form Request validators can help you save time. Form Request classes allow you to easily abstract your form validation into a separate step/file which, in my opinion, makes for cleaner code. The portal I developed had a number of different models, and having one validation Form Request for each made it easy to ensure the data submitted would be validated correctly. Type hinting is amazing.
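As a sketch, a Laravel 5 Form Request looks like this (the class name and rules are hypothetical):

```
<?php namespace App\Http\Requests;

class StoreArticleRequest extends Request
{
    public function authorize()
    {
        // Authorisation is handled elsewhere (e.g. by middleware).
        return true;
    }

    public function rules()
    {
        return [
            'title' => 'required|max:255',
            'body'  => 'required',
        ];
    }
}
```

Type hint it in a controller action, like `public function store(StoreArticleRequest $request)`, and validation runs before your method body does.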

Home(stead) is where the heart is. Just in case you didn’t know, Homestead is a pre-configured development environment for Vagrant that makes it painless to develop your PHP site with a modern toolset. Technically, Homestead is created for Laravel, but you can use it for any PHP project! I have the VMware integration for Vagrant, which I find to be a fair bit faster, but if you’re just checking Homestead out, the VirtualBox adapter should be more than enough.

Eloquent accessors and mutators are fantastic. Instead of jamming your create/update methods with formatting of different attributes of a model, use mutators and accessors to automatically handle all those little tasks. For example, in a project I needed an attribute to be stored in the database as JSON, but when accessed within the project, it’s automatically formatted to an array by the Eloquent model. When that array is modified throughout the lifetime of the application and then saved back to the database, it’s converted to a JSON string. It can be tempting to include validation in mutators, but I would recommend against doing that. For the most part, use them as a way to convert or format one thing to another.
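Here’s roughly what the JSON example above looks like on a model; the model and its options attribute are hypothetical:

```
<?php namespace App;

use Illuminate\Database\Eloquent\Model;

class Report extends Model
{
    // Accessor: decode the stored JSON string into an array on read.
    public function getOptionsAttribute($value)
    {
        return json_decode($value, true);
    }

    // Mutator: encode the array back into a JSON string on write.
    public function setOptionsAttribute($value)
    {
        $this->attributes['options'] = json_encode($value);
    }
}
```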

Forge and Envoyer are a match made in heaven. When it came to deploying this project, I had the opportunity to deploy to a VPS, rather than a bog-standard Apache web host. I decided to deploy to DigitalOcean using Forge, then continuously deploy using Envoyer. The initial configuration of Forge + Envoyer was a bit confusing, but luckily Laracasts came to the rescue. After I got my head around how the two worked together, all updates I’ve done to the project since have been easily deployed with virtually no hassle at all.

Testing… 1, 2, 3, testing… Finally, in Laravel 5.0.x I was using Jeffrey Way’s Integrated testing utility, which got merged into Laravel core in 5.1. The aforementioned testing utility allows one to write tests in a more self-explanatory way. What that means is rather than getting down and dirty with PHPUnit, you can write tests like `$this->visit('/')->see('Hello!');` which reads much better.

By the way, if you’re interested in signing up with DigitalOcean, please use my referral link. I scratch your back, you scratch mine :)

## HTTP/2 and You

Note: this article originally appeared on the Humaan blog.


This article gets pretty techy! If that doesn’t sound like your bag, here’s a quick summary: the HTTP network protocol has existed since the early days of the web, and it’s about to be succeeded by HTTP/2 which will make communications between servers and browsers more efficient. It also means we need to change the way we optimise our websites to take advantage of the technology, so we don’t work against it.

The dawn of HTTP/2 is upon us. Since 1999, we’ve been using the Hypertext Transfer Protocol version 1.1 which isn’t particularly efficient. After many years of debating, HTTP/2 has been standardised, approved, and is now on its way to a browser near you. Before we see what HTTP/2 brings to the table, we should have a look at how it came to be.

### A brief history

The year is 2009. Google, not satisfied with the speed of the web, developed an internal project that became known as SPDY. SPDY (not an acronym, pronounced ‘speedy’) aimed to reduce page load time by multiplexing resources through one connection between server and client, rather than having to open a new connection for each resource required to load the page.

By early 2011, SPDY was running on all Google services and happily serving to millions of users. A year later, Facebook and Twitter both implemented SPDY on the majority of their services (and within 12 months, SPDY was on all their services).

Almost three years after the initial draft, HTTP/2, née HTTP 2.0, was approved as a standard by the Internet Engineering Steering Group (IESG) and proudly bears the moniker RFC 7540. Mum and dad would be so proud. Now that we know the history behind how HTTP/2 came to fruition, what does it actually mean for you and me?

### Getting ready for HTTP/2

Thankfully, there’s no deadline for having to have HTTP/2 enabled on your server, and it’s fully backwards compatible with HTTP 1.1 clients. A request from a client to a server will specify an Upgrade header with the value h2 or h2c. If this header/token combination isn’t in the request, it’s safe to say they don’t support HTTP/2, so serve them content over HTTP 1.1. Otherwise, if the client does support HTTP/2, the connection is upgraded and settings from the header HTTP2-Settings are processed by the server.
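As a sketch, a cleartext upgrade exchange looks roughly like this (per RFC 7540):

```
GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c
```

From that point on, the connection speaks HTTP/2.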

For security and privacy reasons, SPDY was built with SSL as a requirement (that’s right, SPDY wouldn’t serve content in cleartext). When SPDY was first drafted into HTTP/2, a requirement for serving HTTP/2 over Transport Layer Security (TLS) 1.2 was requested. Given the significant hassles required to get SSL certificates configured correctly, not to mention the costs associated with purchasing and maintaining said certificate, this requirement was finally dropped and HTTP/2 can serve content over cleartext (when the Upgrade token h2c is specified). c for cleartext. Got it?

_On a side note, the Let’s Encrypt project by the Internet Security Research Group (ISRG) that’s launching in September 2015 aims to take the pain and cost out of SSL certificates. I personally can’t wait until the project is publicly available to everyone in a few months._

Support for HTTP/2 is relatively good at this stage, given that the standard was only approved several months ago. All the common browsers have support for HTTP/2, or will have support for it in the coming months. Chrome, Firefox, and Internet Explorer currently only support HTTP/2 over TLS (so no cleartext HTTP/2 until full implementations are completed). Server-wise, IIS supports HTTP/2 in the Windows 10 beta, and Apache supports HTTP/2 with mod_h2 (and little hacks), but this should be improved soon. Nginx is still working on a module, which should be released by the end of 2015.

HTTP/2, like SPDY, has the ability to keep the connection between the client and server open, continuously send data upstream and downstream without having to open a new connection for each request. This means there’s no ping pong match between the client and the server – it just goes through the existing open connection.

### A change in process

To work around the overhead required for each resource to be downloaded, practices like image spriting and resource concatenation have become the de facto way to improve your site’s performance. Ironically, this practice will actually be harmful to websites that serve to HTTP/2-compatible clients.

Which means… no more concatenating! When someone browses to your site, you can safely serve the core CSS required, plus a separate CSS file for page-specific content. This means that if the user only visits one page, they’ve only downloaded the core styles and the styles specific to that page. Depending on the size of your CSS, you could be saving tens or hundreds of kilobytes on each request. Phenomenal stuff when you think about it.

With the lower cost of sending data to the client, this doesn’t mean you can go back to serving uncompressed JPEGs, PNGs and the like. You still need to be smart about what assets are sent to the client, along with the size of those assets. JavaScript module loaders like RequireJS will more than likely see another rise in popularity as setting up the r.js optimiser/combiner tool will not be required.

### Ready for prime time?

Depending on the timeliness of full HTTP/2 support by browser and server vendors, we’re hopeful that by early 2016 the Humaan dev team will be developing with the HTTP/2-first mindset.

While the HTTP/2 standard isn’t perfect, it’s a heck of a lot better than HTTP 1.1 in many ways, and the future for fast websites looks rather bright indeed. To learn more about HTTP/2, I highly recommend reading “http2 explained” by Daniel Stenberg (the genius behind cURL).

## Contempo — A JSON Resume Theme

I recently started using JSON Resume to make it easier to update my résumé and I found that I didn’t really like any of the themes that were available. So, I did what any person who had a few hours free would do and developed my own… drumroll

### Introducing Contempo

Now that was pretty anticlimactic. I’ve had the same résumé design for a few years now. I’ve loved its simple design, the lack of colour, and the clean layout. When it came to updating it, however, it was a complete nuisance. All the horizontal rules in the template had to be manually adjusted each time I added a new line - to put it plainly, I hated updating my résumé! Something had to be done.

After fluffing around with Handlebars and CSS for a few hours (I’ve never used Handlebars before, I do quite like it though) I hacked together a theme that resembled my original Pages résumé document. The result is pretty close to my old résumé, my only issue is that there doesn’t appear to be a way to do footers reliably.

If you already have a published JSON Resume file, you can visit http://registry.jsonresume.org/yourusername?theme=contempo to preview the Contempo theme. Alternatively, cd into your JSON Resume directory then publish with resume publish --theme contempo. Last resort, if you don’t have a JSON Resume résumé, you can see my hosted résumé on JSON Resume.

## Calcbot for iOS 8 Released

The best calculator for iOS has been updated for iOS 8. Calcbot, by Tapbots, has been given a fantastic re-design, and Convertbot has been merged into Calcbot (requires a small In-App Purchase). As much as I miss the awesome skeuomorphic design of the original Tapbots style, the new design is very clean and fluid. Now I just wish I could override the Control Center shortcut to open Calcbot, rather than the standard Calculator app.

Jump on over to the App Store to get your copy today!

## Using X-XSRF-TOKEN HTTP Headers for AJAX in Laravel 5 (Updated)

Update (24/02/2015): Laravel 5.0.6 has been updated to support cleartext X-XSRF-TOKENs. As explained in the recent post CSRF Protection in Laravel explained by Barry vd. Heuvel, Laravel can now process X-XSRF-TOKENs if they are transmitted in cleartext. Some would argue it’s still better to encrypt the CSRF token, but that’s for much smarter InfoSec people than me.

The following article was written with Laravel 5.0.5 in mind, but is still relevant as of 5.0.6.

If you’ve recently started using Laravel 5 and are trying to use csrf_token() with the header X-XSRF-TOKEN in your AJAX requests, you’ll notice that you get an HTTP 500 error, rather than a 200 OK response. This is because the CSRF middleware expects the csrf_token sent via X-XSRF-TOKEN to be encrypted - something the Laravel documentation doesn’t make clear.

When I originally stumbled across this issue I thought it was a bug in Laravel and submitted a PR (which turned out to be a bad, naughty, terrible, not so good thing to do - in short, I should learn to search). Regardless, we have two ways of getting around this: encrypt the damn CSRF token and use that in our code, or alter the middleware to not perform decryption on the CSRF token.

### Option 1 - Encrypted CSRF Token

Our first option is to encrypt the CSRF token. As you may already know, you can access the CSRF token by using the csrf_token() function. Load up your routes.php file so we can add the encrypted token to the views.

For each view you call, you’ll need to append this method:

withEncryptedCsrfToken(Crypt::encrypt(csrf_token()));

So, if you were calling a view for the home template, you’d do this:

view('home')->withEncryptedCsrfToken(Crypt::encrypt(csrf_token()));

Terrific. In that template you can access the variable like below:

<meta name="csrf_token" content="{{ $encrypted_csrf_token }}" />

Chuck that in your main view in the <head> so your JavaScript framework of choice can gobble it up. Just make sure to do use Crypt; if you’re in a different namespace.

### Option 2 - Non-encrypted CSRF Token

Our second option is to alter the VerifyCsrfToken middleware to not expect an encrypted CSRF Token when transmitted via a HTTP Header.

Open up the VerifyCsrfToken.php middleware (located at app/Http/Middleware/) and we’ll extend the method tokensMatch.

protected function tokensMatch($request)
{
    $token = $request->session()->token();

    $header = $request->header('X-XSRF-TOKEN');

    return StringUtils::equals($token, $request->input('_token')) ||
        ($header && StringUtils::equals($token, $header));
}

Essentially, what I’ve done is copied the method from Illuminate/Foundation/Http/Middleware/VerifyCsrfToken.php then removed the call to $this->encrypter. You’ll also need to add a use at the top of VerifyCsrfToken.php like so:

use Symfony\Component\Security\Core\Util\StringUtils;

Once you’ve done that, you can safely use the plain old csrf_token in your X-XSRF-TOKEN header and get a 200 OK with all your AJAX calls. If you didn’t quite figure out the middleware alteration, load up this Gist to see how I modified the VerifyCsrfToken middleware.

### Implementing in jQuery

If you happen to be using jQuery with Laravel, here’s how you can add the HTTP Header to your AJAX requests. As usual, there’s a few different options. If you’re doing a heap of requests over the lifetime of the session, you’ll want to set this token for all AJAX requests. If not, you can do it inline with the AJAX call.

First up, the pre-filter to make this global for all $.ajax requests:

$.ajaxPrefilter(function(options, originalOptions, xhr) {
    var token = $('meta[name="csrf_token"]').attr('content');

    if (token) {
        return xhr.setRequestHeader('X-XSRF-TOKEN', token);
    }
});

Good stuff. Now, all $.ajax requests in the application lifecycle will use that prefilter with the token and HTTP Header.

If you just need the HTTP Header for one or two requests, here’s how you can add it to the $.ajax call:

$.ajax({
    url: 'http://example.com/api',
    beforeSend: function (xhr) {
        var token = $('meta[name="csrf_token"]').attr('content');

        if (token) {
            return xhr.setRequestHeader('X-XSRF-TOKEN', token);
        }
    },
    /* ... */
});

That “pre-filter” will be in effect for that $.ajax call only.

### Moving Forward

Now, it’s entirely up to you how to proceed. Just to be safe, I’ve decided to go with Option 1 because I want to err on the side of caution, but if your Laravel 5 app is super simple and can’t do much/any harm, I think it’s OK for your CSRF token to be checked with a plain string-to-string comparison rather than being decrypted first.

Only time will tell.

Updates:

  • 19/02/2015: I originally had the CSRF token encrypted in the boot method of the AppServiceProvider. This was incorrect as csrf_token isn’t set unless it’s called from within a Route. My mistake!
  • 24/02/2015: Updated with comments about Laravel 5.0.6 now supporting cleartext X-XSRF-TOKENs.

## Changing Times

These last few months have seen many changes happen in my life. I’ve moved away from the Apple Systems Engineer role/world and transitioned into a Web Developer role here in Perth. I’ve moved state, proposed to my girlfriend (now fiancée), and bought a house (we’re also getting a dog).

This year saw me launch a fun project that I had been working on for a few months, a database for the best podcast in the world, Stop Podcasting Yourself. The website, SpyDB, is built on Laravel, my favourite PHP framework. This is the biggest project I’ve used Laravel on, and it was a blast to see just how easy it was to make something quite functional with relatively little past experience in Laravel.

The OS X Mavericks Server articles I wrote last year have continued to be a huge hit, but unfortunately there will be no OS X Server tutorials this year. I had planned (and started writing) an OS X 10.10 Yosemite Server eBook that was going to be distributed through Leanpub, but I’m just not able to dedicate the time to writing about OS X Server, especially with paying customers involved. It wouldn’t be fair to make buyers of an incomplete book wait months for a book that might never be completed. I’m just not that kind of guy.

The content of this website will likely shift towards web development, but I’m still playing around with OS X Server, so no doubt I’ll blog about some issues I come across.

Many thanks for reading, I hope you have a good rest of your day.

Dan xx

## Learning Nagios 4 - A Book Review

As a longtime user of GroundWork, I’ve always had an abstraction layer between me and Nagios. I’d always thought that having a better understanding of the internals of GroundWork would make it easier for me to use, but I didn’t take the opportunity to learn about Nagios until now.

The book, Learning Nagios 4, by Wojciech Kocjan, weighs in at 400 pages and is the second edition. I found the book to be very well written, and it contained a lot of good technical information that I thought was interesting and beneficial.

Chapter 1 introduces Nagios to the unfamiliar user, and Wojciech gives good examples that assure system administrators that Nagios is suitable for them, and that it can provide IT staff with a very good system for checking infrastructure and software to ensure it’s working correctly.

Chapter 2 runs through installing and configuring Nagios. I was very pleased to see a book providing instructions on installing software from source, as it’s rather unusual in my experience to find books that don’t just provide installation by package manager. Going through common Nagios configurations was also interesting, as I learnt a few quirks about templates and precedence.

Chapter 3 is all about the web interface that complements Nagios. As a user of Nagios by proxy through GroundWork, I was a little shocked at the web GUI and how different it was to the interface I was used to, but it is nice to see Nagios 4 has implemented PHP support so there’s a bigger avenue for theme customisation.

Chapter 4 talks about the basic plugins that are provided with Nagios. If you’re a follower of my blog you would’ve seen my Nagios plugins for OS X Server, some of which were co-authored with/by my friend Jedda Wignall. I learnt quite a bit about the inbuilt plugins that come with Nagios, including the plugins that can schedule package manager checks - very cool!

Chapter 5 discusses advanced configuration details, mainly about templates and the nuances to inheritance, along with describing what flapping actually is. I thought the section on using multiple configurations (like OS type, location etc) to generate a configuration for a specific machine was quite interesting, and would allow the user to create advanced host settings with relative ease.

Chapter 6 was a chapter that I found very interesting as it focused on alerts and contacts. As a former member of a very small team, we were inundated by emails every day and it became hard to keep track of what was coming in. The author’s example of constant email flooding was exactly what happened to us. It’s worth spending a bit more time setting up proper alerts to make sure the right information reaches the right people, rather than spamming everyone constantly.

Chapter 7 talks about passive checks, and how they compare to the normal active checks. NSCA, or the Nagios Service Check Acceptor, is also discussed, which is a daemon on the client end that can send check results back to the monitoring service securely. I’ve not used either type of passive check, so learning about them was quite interesting. I’m looking forward to putting them to good use some time.

Chapter 8 contains a ton of great information and detail about the remote server checks performed by SSH, and the Nagios Remote Plugin Executor (NRPE). The author provides good arguments for choosing either of the services, depending on your requirements. I hadn’t actually heard of NRPE before, but it looks to be quite powerful without the overhead of SSH connections by the host.

Chapter 9 is all about SNMP and how it can interact with Nagios. In past experience I’ve only ever had bash scripts to process SNMP responses, but now I know how to implement it properly into Nagios without having a conduit processing script. I also never really knew much about SNMP, so it was good to learn about what SNMP actually is, not just how to interact with it, which can be an issue in some technical books where interacting is explained, but the source/destination isn’t.

Chapter 10 starts off by covering getting Nagios working with Windows clients, which to me isn’t very applicable as I’m purely a Linux/Unix/OS X man myself so my eyes glazed over as I pushed through that section. Having said that, it’s good to know Nagios monitoring is fully supported in Windows with the appropriate software installed. Another concept that is looked at in Chapter 10 is the setup and configuration of a multi-server master/slave setup with multiple Nagios servers. Now, unfortunately (or fortunately, depending on which way you look at it) I’ve not been in a position where I’ve needed to have multiple Nagios servers performing checks, but it’s useful to know that it’s possible, and to have some instructions on getting it set up.

Chapter 11 is probably my favourite chapter of the book because it’s all about programming Nagios plugins. The book has a multitude of examples written in different languages. I’ve always done my scripts in Bash, but had never even thought of writing plugins in PHP, which is my strongest language. Having seen code for a few languages (like Tcl) that I’ve heard of but not used, this book has encouraged me to try other languages for Nagios plugins, and not limit myself to Bash.

Chapter 12, the final chapter, talks about the query handler which is used for two-way communications with Nagios. There’s also a section on Nagios Event Radio Dispatcher (or NERD) which can be used for a real-time notification system for alerts.

Overall, I would highly recommend this book to any sysadmins looking to implement an excellent monitoring solution that is easy to set up, yet powerful enough through its extensive plugin collection and flexibility. After reading this book I’ve come away with a stronger knowledge of Nagios that will benefit my work in the future.

Note: I was provided with a free eBook to review this book, however, this review is 100% genuine and contains my true thoughts about the book.

## Donations Now Accepted

For a very, very long time I’ve put off accepting donations to the site, but with Stripe accepting me into their Australian beta, I’ve decided to sign up and see how it goes.

If any of my articles have helped you out and if you’ve got some coin to spare, help a fellow dev/sysadmin out and donate a caffeinated drink or two! Many thanks.

Donate to yes > /dev/null

## Installing the APR-based Tomcat Native Library for Jira on Debian 7 (Updated)

If you’re self-hosting Jira on Debian (or another platform, but this post is Debian specific) you might notice in the catalina.log file a line that reports something like this:

INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

This message isn’t a bug per se, but if you’re a completist like me, you’ll want to get the library installed to speed up performance for Jira.

First up, you’ll want to install the Apache Portable Runtime (APR) library, along with OpenSSL if you want to use SSL. Using the command line, we’ll install them with apt-get:

apt-get install libapr1-dev libssl-dev

Go through the process of installing the libraries by following the apt prompts. Note that the version of OpenSSL installed is 1.0.1e, but it is patched for CVE-2014-0160 (aka Heartbleed).

Next, we’ve got to grab the tomcat native library source tarball (tomcat-native.tar.gz) from Jira so we can compile from source. If you installed Jira from the .bin installer, Jira should be installed at /opt/atlassian/jira. From there, you’ll find the tomcat-native.tar.gz in the bin/ folder (full path: /opt/atlassian/jira/bin). To keep the bin directory clean, we’ll extract the tarball to the home folder of the logged in user:

tar -xzvf tomcat-native.tar.gz -C ~

Now if you cd into the home directory (cd then ls -la) you’ll find a directory called tomcat-native-1.1.29-src. We’ll need to locate the configure script that is somewhere in that folder. Thankfully, I’ve already done the research for you and it’s in jni/native. cd to that location.

Now that we’ve got the configure script, we need to run the script with the appropriate configuration locations and files. At minimum, you should provide the APR location (--with-apr) and the JVM location (--with-java-home). If you installed APR with apt, the APR config binary will be in /usr/bin/apr-1-config.

Next, the JRE/JDK location. Jira comes with a bundled JRE, but it’s not sufficient for the Tomcat Native Library configurator. If you use the Jira JRE path of /opt/atlassian/jira/jre you’ll get the error checking os_type directory... Cannot find jni_md.h in /opt/atlassian/jira/jre when you run the configuration script. To rectify this, you’ll need the OpenJDK JRE for Linux. Once again, using apt we can install it with ease: apt-get install openjdk-7-jre. After that has completed, if you do find / -name "jni_md.h" you should get something like this:

/usr/lib/jvm/java-7-openjdk-amd64/include/jni_md.h
/usr/lib/jvm/java-7-openjdk-amd64/include/linux/jni_md.h

You can use either file, as a diff shows that both files are the same. With that in mind, our configuration variable --with-java-home can be set to /usr/lib/jvm/java-7-openjdk-amd64. Finally, with SSL support, you can safely set --with-ssl to yes as the configurator can guess the OpenSSL settings. With all that in mind, our final configure string will be as follows:

./configure --with-apr=/usr/bin/apr-1-config --with-java-home=/usr/lib/jvm/java-7-openjdk-amd64 --with-ssl=yes

Hit return and let the configure script finish. Now you can finish it off by making and installing by doing the following: make && make install. As we did not specify an installation prefix, the compiled library will be installed at /usr/local/apr/lib. Jira expects the library to be in one of these folders: /usr/java/packages/lib/amd64, /usr/lib64, /lib64, /lib or /usr/lib. I’m just going to copy them to /usr/java/packages/lib/amd64, do that like so:

cp /usr/local/apr/lib/* /usr/java/packages/lib/amd64

Now you’re done. Simply start Jira using your preferred method and you should find that APR is being loaded correctly. You can verify this by doing cat /opt/atlassian/jira/logs/catalina.out | grep -A 1 "AprLifecycleListener" and you should see something like this:

Jun 02, 2014 4:23:11 PM org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.29 using APR version 1.4.6.
Jun 02, 2014 4:23:11 PM org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
Jun 02, 2014 4:23:11 PM org.apache.catalina.core.AprLifecycleListener initializeSSL
INFO: OpenSSL successfully initialized (OpenSSL 1.0.1e 11 Feb 2013)

Hooray! Jira is running, and it’s loading the Tomcat Native Library with APR successfully!

Updates

  • 27/09/2014: updated with newest package name and cp path.