Author Archive

Let’s get ready to rumble!

There was a time when I would have laughed at the thought of a Microsoft shell going up against a Linux shell. In fact, when I first heard of PowerShell, that’s exactly what I did. Given the horror that is cmd.exe, any other reaction is hard to imagine. I think Matt Wrock at Hurry Up and Wait states it best: “Friends don’t let friends use cmd.exe and you are my friend.” Because of this, PowerShell is often convicted of the same crimes and dismissed, by association, with varying degrees of prejudice. Mike James, one of my favorite authors, never fails to amuse me with his grumpy PowerShell skepticism, for example. Now that PowerShell is open-source and cross-platform, however, I think the tide may be turning. In fact, I was recently surprised by how easy it was to get a working pwsh prompt on a fresh Antergos Linux installation.

# Install the .NET SDK and the pacaur AUR helper
pacman -S dotnet-sdk pacaur
# Install the PowerShell binary package from the AUR
pacaur -S powershell-bin

Unfortunately, I could not get PowerShell to work with the integrated terminal in VS Code, but I think problems like these will gradually disappear. There seems to be a lot of momentum in making this a first-class experience.

This leaves Linux users faced with an interesting choice. Like so many other dilemmas in software development, the question is not can we, but should we. For long-time .NET programmers like myself, PowerShell on Linux is an easy sell. For veteran Linux hackers, on the other hand…not so much. There is a major paradigm gap, and I honestly don’t know which camp will carry the day. So in the interest of “science”, let’s look at the same example in both.

I recently needed to query the file system for a list of unique filenames across a set of search directories. For example, given a directory structure like the following (the intermediate directory names are invented here for illustration), I would expect the output to be file1, file2, file3.
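parentDirectory
├── dirA
│   └── searchDir
│       ├── file1
│       └── file2
├── dirB
│   └── searchDir
│       ├── file1
│       └── file3
└── dirC
    └── searchDir
        └── file2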


This can be accomplished as follows.

PowerShell:

Get-ChildItem "parentDirectory/*/searchDir/*" | Select-Object Name -Unique

Bash:

ls parentDirectory/*/searchDir/* | xargs -n 1 basename | sort -u

I would argue that the PowerShell version is more intuitive and readable, although many find the verbosity off-putting and prefer the terseness of Bash. Admittedly, there is beauty in the succinctness of many Bash commands, but it is a common misconception that PowerShell lacks similar capabilities. In many cases you can achieve a high degree of terseness simply by taking advantage of PowerShell’s support for aliases and partial parameter names. Here is the same command expressed tersely. Note that it is actually more compact than the Bash version, although I almost always prefer gross verbosity myself, but hey, that’s just me.

gci parentDirectory/*/searchDir/* | select Name -U

Another major difference is that many tasks in Bash involve piping text between various binary programs. In the above example, four binaries are required to accomplish the task, while PowerShell needs only two native cmdlets. Perhaps the most important difference, however, is how pipelines work. In traditional Linux shells, you usually work with binaries that pass text on the standard input and output channels. In PowerShell, you pass around objects in the full object-oriented sense. For example, if I were interested in CreationTime, Extension, IsReadOnly, Length or any of the other numerous FileInfo properties, it would be a simple matter to integrate them into my query. Because of this, PowerShell feels closer to a functional programming language than to a traditional shell, where complex queries involving data transformations are much harder and, depending on the scenario, usually require some clever text-parsing gymnastics.
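For instance, a sketch along these lines (using the same illustrative paths as above) folds additional FileInfo properties into the query and sorts the results by size:

# Work with FileInfo objects directly: filter, project and sort on their properties
Get-ChildItem "parentDirectory/*/searchDir/*" |
    Where-Object { -not $_.IsReadOnly } |
    Select-Object Name, Extension, Length, CreationTime |
    Sort-Object Length -Descending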

Despite PowerShell’s arguable advantages, the traditional way of doing things in Linux is very mature and robust, and it has an enormous body of documentation to help developers become productive quickly. It will certainly be interesting to see how things unfold. I hope to see 12 exciting rounds. Let’s get ready to rumble! 🙂

Agile Estimation

I recently had a conversation with a friend about Scrum, and he mentioned a challenge his team was experiencing. They were a fairly young team with less than a year under their belts, but I have frequently encountered the same challenge and wanted to share my thoughts. Essentially, the problem involved an observed discrepancy between the team’s velocity and capacity. To define these terms: velocity is a relative measurement of the amount of work done in a sprint, while capacity is the total number of hours a team has available to do the work. The discrepancy they observed was that they were only able to allocate about 30% of their capacity before hitting a ceiling with their velocity. In other words, the average amount of work they were able to complete in a sprint, when estimated in hours, was only about 30% of their available time. To make matters worse, several members of the team, including the scrum master (who was also a developer), felt that the solution was to allocate more hours and just work harder. The scrum master even went so far as to unilaterally reduce estimates in order to bring more work into a sprint. Apparently, the team’s management supported this approach. My friend (who constantly works overtime anyway) was very discouraged about the direction things were going with Scrum, and I certainly cannot blame him.

In my own experience, excess capacity is not really a problem. It simply means that the team is not able to accurately estimate the hours required to complete a work item. This is usually caused by poor estimation skills (which can only be improved with practice) or the mid-sprint discovery of unplanned work. Teams that are transitioning to Scrum almost always estimate poorly and teams that work with legacy code are often unable to predict the actual work required for any given work item. In situations like these, the best approach, in my opinion, is to establish an average velocity as quickly as possible and use it as a measure of how much work to attempt in a given sprint. Once a team is able to consistently complete their commitments, they can focus on improving their velocity. The metric of capacity is not needed for any of this and to be honest I don’t find it useful for anything.

Unfortunately for my friend, the real challenge his team is facing is organizational. Their scrum master is functioning as a local authority and is empowered to do so by the existing management structure. This is a hierarchical, authoritarian mentality common in traditional development organizations, and it will not go away willingly. It can take many forms. Sometimes the product owner is the local authority, sometimes it is a senior engineer or even an external manager. Sometimes there are competing authorities. Regardless of the form, the result is a top-down hierarchical arrangement where decisions are made outside of the team, passed down from authority to authority, and the team is expected to comply. On a mature agile team, this organizational structure is inverted. The team is empowered by the management structure to make their own decisions and determine their own direction. The team works together without positions of authority and makes decisions based on consensus.

The entire agile movement is largely premised on the idea that this organizational arrangement is more effective at producing value than the traditional top-down arrangement. Unfortunately, it is rare to see a management structure willing to give up top-down control once it has become accustomed to it. Instead, it is common to see situations like my friend’s, where an organization superficially embraces the motions of an agile methodology without actually embracing the underlying values. For organizations that are not already well established in agile, this is a very difficult change. In order for a true transformation to occur, management must genuinely support and value the principles of agile, and that requires intelligence, courage and vision: qualities that are rarely found in combination anywhere.

Below is a synopsis of the conversation referenced above.

[Here is the quote I promised to send you regarding your question on working overtime.]

“The Agile mindset views recourse to overtime, other than on an exceptional basis, as detrimental to productivity rather than enhancing it. Overtime tends to mask schedule, management or quality deficiencies; the Agile approach favors exposing these deficiencies as early as possible and remedying their underlying causes, rather than merely treating the symptoms.”

Sustainable Pace

Thank you. I will use this as ammo in our next retrospective when I bring up overtime hours.

Good luck!

Ultimately it doesn’t really matter if the numbers for your estimates are big or small. Their only purpose is to provide a loose projection of when future work might be complete.

So for example let’s say you have a backlog of 100 work items that you want to complete in a Scrum project. The first step would be to look at each item and assign a numeric value to the amount of work involved. It’s just a rough guess, but you want to be as honest as possible. You may have a team full of speedsters that believe they can complete anything in under 2 hours or you may have a timid team of novices that estimate high. It doesn’t really matter. Let me explain…

Imagine there are two teams—team rabbit and team turtle. Both teams are asked to complete the same 100 work items using Scrum. Team rabbit is full of speedsters and they estimate each work item on the low side. When they add up all of their estimates they get the number 100. This estimate may be 100 hours, or 100 points or 100 unicorns…it doesn’t really matter. Only the number itself matters. Let’s use “points” as the unit just for the sake of argument.

At the end of sprint 1 team rabbit has completed 15 points worth of work items. In sprint 2 they complete 23 points and in sprint 3 they complete 17 points. Team rabbit’s average “velocity” is 20 points per sprint ((15 + 23 + 17) / 3 = 60 / 3 = 20) and management can reasonably conclude that they will complete all the work in approximately 5 sprints. If another 100 points of work are added to the project, management can plan for that work to take another 5 sprints to complete.

Team turtle, on the other hand, is not confident. They estimate each of the 100 work items and come up with a total of 1000 points. At the end of sprint 1 they complete 100 points, then 200 in sprint 2 and finally 150 in sprint 3. Their average velocity is 150 points ((100 + 200 + 150) / 3 = 450 / 3 = 150), and management can conclude that it will take them between 6 and 7 sprints to complete the work. Note that even though their average “velocity” is higher, their actual speed is lower. Average velocity is just a relative number for measuring a team’s speed. You can’t really compare the numeric velocity of two teams because each team estimates in a unique way.
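To make the arithmetic concrete, here is a quick PowerShell sketch of the same calculations (the point values are the hypothetical ones above):

# Completed points per sprint for each hypothetical team
$rabbit = 15, 23, 17
$turtle = 100, 200, 150

# Average velocity = total points completed / number of sprints
$rabbitVelocity = ($rabbit | Measure-Object -Average).Average   # 20
$turtleVelocity = ($turtle | Measure-Object -Average).Average   # 150

# Forecast = remaining points / average velocity, rounded up
[math]::Ceiling(100 / $rabbitVelocity)     # 5 sprints
[math]::Ceiling(1000 / $turtleVelocity)    # 7 sprints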

So as you can see, the actual estimate number doesn’t really matter. Its only value is to determine a team’s average velocity, which is in turn used to forecast when future work *might* be complete. I’ve found forecasting to be reasonably accurate out to about 3 months, assuming the team’s average velocity is stable and they estimate in a consistent way. In my opinion, forecasting isn’t very useful.

Average velocity, on the other hand, *is* useful. Once you establish an average velocity, you have a pretty good idea of how much work your team can complete in a given sprint. If you commit to more work than what you’ve been able to complete (on average) in the past, you are probably not going to meet your commitment without working overtime and/or taking shortcuts, and in the long run you’ll do more harm than good that way.

The best way to use estimation is to be as honest and consistent as possible in order to establish an average velocity. Then once that is established, use your retrospective meetings to come up with ideas to increase average velocity by eliminating bottlenecks and inefficiencies. Fudging the numbers in order to force the team to commit to more work in a given sprint will probably not change the actual amount of work that is completed, but it will certainly lower the quality of work, create resentment within the team, and even further reduce your ability to forecast accurately.

One last thing…sometimes a team will break sprint backlog items into tasks. For example, you might have a backlog item to add a new “report” feature. When you decide that you will work on this backlog item in the next sprint, it is common to create tasks for all of the individual steps involved in completing it. You might have the following tasks.

  1. Create data structures to store the report data
  2. Create the report layout
  3. Create the report export process
  4. Write unit tests
  5. Deploy to QA environment
  6. Perform unit tests

Some teams will estimate these individual tasks in hours and use this value to determine how much work to commit to in a given sprint. For example, if your sprint lasts 2 weeks and you have 4 developers, then you have a capacity of 2 * 40 * 4 = 320 hours to spend on tasks in a given sprint. In my experience, this is a very poor way to estimate how much work you should commit to in a sprint. Average velocity is much more realistic. In the 3 years that I did Scrum, we always estimated our tasks in hours but used average velocity to gauge the amount of work to commit to. Most of the time the estimated task hours were 20-40% of our capacity, and I worked overtime almost constantly.

Overall, I think agile is mostly about getting into an iterative mindset and working as a team. Check out the video linked below. It’s the best explanation of Scrum that I’ve seen. It might be worth sharing with your team.

The one true variable in our sprint is the average velocity. For the last 7 months, our average velocity has been 280, but our hours worked always make it look like we only worked two-thirds of the month. So, management keeps saying we need to increase our velocity, since our hours worked show we have extra time. I don’t know if I can get the under-estimators to be realistic, since they get praise from management for estimating low numbers.

Agile is a great way for people to get promoted, because they can under-estimate, let other people do the work, and then get praise from management because they push for fewer hours.

It really takes a good team to make Agile work.

Agreed, difficult team members can really make life unpleasant. One thing that I did to increase the number of task hours was add tasks for every little thing…testing, deployment, planning, refactoring. I took a few days and wrote down everything I did to get an idea of what I was spending time on, and then I started including it in my task estimates, even if it was just 15 minutes. I found that I was underestimating the time it took to test and debug code. For example, if something took 4 hours to code, it took about 4 hours of testing to get all the edge cases worked out, but I was only estimating 30 minutes.

Despite all that, we rarely committed to more than 50% of our team capacity. Fortunately, no one cared. If I ever work on a Scrum team again, I plan to suggest that we not even bother estimating task hours. It’s a waste of time.

Also, I should mention that capacity is typically calculated at 6 hours per team member per day (not 8 hours like I implied in my last email). So if your sprint is 1 week and you have 10 team members, the capacity is 5 days * 6 hours * 10 team members = 300 hours. When committing to work, most recommend that you not go above 75% of your capacity. But like I said, I think this is a waste of time.
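As a quick sketch of that calculation (using the same hypothetical numbers):

# Capacity = sprint days * focus hours per day * team members
$sprintDays  = 5
$hoursPerDay = 6
$teamSize    = 10
$capacity    = $sprintDays * $hoursPerDay * $teamSize   # 300 hours

# Common guidance: commit to no more than ~75% of capacity in task hours
$commitmentCeiling = $capacity * 0.75                   # 225 hours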

It’s all just a bunch of nonsense anyway. The real trick is having fun despite all the difficult people. 🙂

Burger King explains Net Neutrality

Although I strongly support the ideas of the Libertarian party with respect to minimizing the involvement of government in business and personal matters, I also believe that it is the responsibility of government to protect the rights and freedoms of its people. As a teenager, I remember logging into the Ozarks Regional Information Online Network (ORION) with a dial-up modem and telnet client to learn my first programming language and explore a new world of ideas and information. Since then, the internet has grown in unbelievable ways and the world is better because of it. I believe that Net Neutrality is essential to preserving the freedom that the internet provides to any person of any race, age, color or sex. By removing Net Neutrality, ISPs are legally able to throttle, censor and monetize the information you access according to what is most profitable. I believe in a world where one is free to learn and explore the wealth of information and resources available on the internet without the interference of profit-driven gatekeepers.

For more information, please look into the issue and participate if you can. In the meantime, enjoy Burger King’s video explanation of the topic and remember the stance your elected officials took next time you vote.

Book Review – Understanding ECMAScript 6

Although I find the idea of a library in the palm of my hand very appealing, I still prefer a paper book to an electronic reader. For this reason, I recently found myself browsing the JavaScript section of a local Barnes & Noble, where I discovered Understanding ECMAScript 6 by Nicholas C. Zakas. I picked it up, almost reluctantly, and headed back to a couch. I was not expecting it to be very interesting or useful, but after reading only a few pages, even my wife could tell that I was hooked. She purchased the book as a birthday gift and insisted that I not read any more until my actual birthday. After some begging, I got permission to read it early and thoroughly enjoyed every page.

To get the most out of Understanding ECMAScript 6, a good working knowledge of JavaScript is required. If you don’t have any JavaScript background, this book is not for you; read JavaScript: The Good Parts instead. If, however, you are generally familiar with the core language concepts like object construction, the usage and behavior of this, functions as objects, closures and prototypes, then you will have a difficult time finding a better resource for learning ES6.

This book is very thorough: at 352 pages it covers all of the changes from ES5 to ES6 (with the additional changes in ES7 covered in an appendix). Despite its moderate size and level of detail, it is well organized and reads like a much smaller and lighter book. I found myself flying through the chapters even though, aside from the introduction, there is nothing especially witty or engaging about the writing style or content. This may seem like a contradiction, but consider the following excerpt on destructuring.

Destructuring for Easier Data Access

Object and array literals are two of the most frequently used notations in JavaScript, and thanks to the popular JSON data format, they’ve become a particularly important part of the language. It’s quite common to define objects and arrays, and then systematically pull out relevant pieces of information from those structures. ECMAScript 6 simplifies this task by adding destructuring, which is the process of breaking a data structure down into smaller parts. This chapter shows you how to harness destructuring for both objects and arrays.

Why is Destructuring Useful?

In ECMAScript 5 and earlier, the need to fetch information from objects and arrays could lead to a lot of code that looks the same, just to get certain data into local variables. For example:

let options = {
    repeat: true,
    save: false
};

// extract data from the object
let repeat = options.repeat,
    save =;

This code extracts the values of repeat and save from the options object and stores that data in local variables with the same names. While this code looks simple, imagine if you had a large number of variables to assign; you would have to assign them all one by one. And if there was a nested data structure to traverse to find the information instead, you might have to dig through the entire structure just to find one piece of data.

That’s why ECMAScript 6 adds destructuring for both objects and arrays. When you break a data structure into smaller parts, getting the information you need out of it becomes much easier. Many languages implement destructuring with a minimal amount of syntax to make the process simpler to use. The ECMAScript 6 implementation actually makes use of syntax you’re already familiar with: the syntax for object and array literals.

Object Destructuring

Object destructuring syntax uses an object literal on the left side of an assignment operation. For example:

let node = {
    type: "Identifier",
    name: "foo"
};

let { type, name } = node;

console.log(type); // "Identifier"
console.log(name); // "foo"

In this code, the value of node.type is stored in a variable called type and the value of is stored in a variable called name. This syntax is the same as the object literal property initializer shorthand introduced in Chapter 4. The identifiers type and name are both declarations of local variables and the properties to read the value from on the node object.

In addition to being clear and concise, this excerpt follows a pattern that is repeated methodically throughout the entire book. The pattern is as follows.

  • Summarize a challenge faced by ES5 developers.
  • Briefly describe the ES6 solution to the challenge.
  • Demonstrate the concept with a simple code example.
  • Explain the code in more detail.

After a few concepts have been introduced, this pattern becomes very familiar and the reader is able to quickly digest even the most challenging concepts. By framing every new ES6 feature in the context of an ES5 problem and then demonstrating it with code, Zakas provides an extremely efficient learning experience that engages the reader in a way that even the most entertaining of technical writers cannot. I especially enjoyed the sections on block bindings, arrow functions, classes, promises and modules, and I would recommend this book as the perfect companion to JavaScript: The Good Parts.

The Bitcoin Experiment

I recently became interested in Bitcoin and decided to become one of the daring entrepreneurs in the field by establishing myself as a credible merchant. After watching several documentaries on Amazon, taking some PluralSight training and reading up on various sites, I bravely generated a Bitcoin address and printed it out. I was ready for a garage sale.

Unfortunately, the local garage sale community was not so intrepid and declined to spend even a single Satoshi on my junk. In fact, they did not bother to spend a single US cent either. I did, however, manage to spend approximately $3.00 USD worth of BTC performing a test transfer of $1.00 USD worth of BTC to my own Bitcoin address. Excellent! If only I had some venture capital to fund my all but certain meteoric growth. Oh wait, look! The sidebar now contains a QR-encoded Bitcoin address and so does the featured image for this post! How convenient! Feel free to support us with your Bitcoin donations!

All joking aside, Bitcoin and its underlying technologies are very interesting and promise to transform the current financial landscape. Stay tuned for a follow-up post where I will explain the basic concepts of Bitcoin, the technologies involved and how they work at a high level, and why Bitcoin is something worth understanding.

Book Review – JavaScript: The Good Parts

Last summer I decided to take a more serious look at JavaScript in preparation for some work involving AngularJS. At the time, I regarded JavaScript as a poorly designed language used only by developers with no better option, but the undeniable rise of Node, NPM, Angular and so many other successful JavaScript frameworks had forced me to second guess my assumptions. Reluctantly, I decided to see what the JavaScript hype was all about.

Many years ago, as an intern programmer/analyst, I had read and enjoyed JavaScript: The Definitive Guide and initially planned to read it again to get back up to speed. I was not overly enthusiastic about its 1,096 pages, so I was pleasantly surprised to discover that the most highly recommended book on the subject was JavaScript: The Good Parts by Douglas Crockford.

At only 176 pages, it would be easy to conclude that JavaScript: The Good Parts could not possibly do justice to a topic as mature and widespread as JavaScript; yet it does. Crockford explains it best in the very first section of the book.

“In JavaScript, there is a beautiful, elegant, highly expressive language that is buried under a steaming pile of good intentions and blunders. The best nature of JavaScript is so effectively hidden that for many years the prevailing opinion of JavaScript was that it was an unsightly, incompetent toy. My intention here is to expose the goodness in JavaScript, an outstanding, dynamic programming language. JavaScript is a block of marble, and I chip away the features that are not beautiful until the language’s true nature reveals itself. I believe that the elegant subset I carved out is vastly superior to the language as a whole, being more reliable, readable, and maintainable.”

This resonated with my own experience of the language. By eliminating the complexity of the “bad parts”, Crockford is able to present JavaScript in a way that allows the reader to quickly understand how to use the language effectively. No time is spent explaining the awful parts of JavaScript, except how to avoid them and why. Moreover, no time is spent discussing specific libraries or frameworks. Even the DOM is not addressed any more than what is absolutely necessary. This may leave some readers with unanswered questions, but Crockford is laser focused on the language itself and the book is better for it.

Although the book is truly a masterpiece, my one humble criticism is that some of the explanations are arguably too terse and some of the code examples are more advanced than they need to be to illustrate the topic at hand. Do not expect multiple explanations for a single concept or any repetition at all. Expect a terse, no-frills, right-to-the-point explanation with code samples heavily laced with functional-style programming. If that suits you (and it should), then you will enjoy this book.

In the terse spirit of the book, below are outlines of the good, awful, and bad parts, according to Crockford. Notice the proportions.

Good Parts

  • Functions as first class objects
  • Dynamic objects with prototypal inheritance
  • Object literals and array literals

Awful Parts

  • Global variables
  • Scope
  • Semicolon insertion
  • Reserved words
  • Unicode
  • typeof
  • parseInt
  • Floating Point
  • NaN
  • Phony Arrays
  • Falsy Values
  • hasOwnProperty
  • Object

Bad Parts

  • ==
  • with Statement
  • eval
  • continue Statement
  • switch Fall Through
  • Block-less Statements
  • Bitwise Operators
  • The function Statement Versus the function Expression
  • Typed Wrappers
  • new
  • void

If you work with JavaScript in any capacity, I highly recommend reading this book!

Angular 2 and Plunker

I put together this plunk for a presentation I will be giving about Angular 2 at work. After working with server-side frameworks for so long, the flexibility and power of a client-side framework is truly amazing. It’s hard to believe that it is possible to code up a fully functional Angular application inside a webpage and then embed that application into another website. The web world has truly changed a lot in the last few years and significantly for the better! I only hope the enterprise world can keep up.

.NET Core Bootstrap Script for Linux

A quick script to bootstrap a .NET Core development environment on Linux Mint 18 / Ubuntu 16.04. It installs the following components.

  • .NET Core 1.1
  • Visual Studio Code with C#
  • Node.js and NPM
  • Yeoman


curl -s | sudo bash -s

The Pajama Coder

My apprentice hard at work in her pajamas!  =)

Book Review – Object Oriented Software Construction Second Edition

Object Oriented Software Construction Second Edition by Bertrand Meyer

I discovered this book in 2007 while searching for references on the subject of object oriented programming. Although I knew the basics at the time and had been coding in OO languages for several years, I felt that I was doing it poorly and wanted to take my understanding to the next level. It did not take much time to realize that OOSC2 was generally regarded as one of the best, if not the BEST, books on the topic, so I happily spent an outrageous $78 on a new copy. That was exactly 9 years ago today, and the book now sells for $120 brand new.

When it arrived I promptly read the first page, browsed through the chapters and set it aside with the sincere intention of reading it cover to cover “when I had more time.” Months passed, then years. I read many other books and continued to program in OO, but I could not seem to muster the motivation to tackle those 1200+ pages. One day I took a new job and brought this book to the office. One of the senior architects walked by and commented, “that’s one of the best books I’ve ever read.” I knew then that it was time. I cleared my schedule and over the course of many months, inched my way through it cover to cover.

Looking back, I would not recommend this book to anyone wishing to learn or improve their understanding of object oriented programming. Instead, I would recommend Head First Object-Oriented Analysis and Design. Although OOSC2 does explain all of the essential OO concepts in great detail, it reads like an academic thesis full of proofs and theorems. This is because at the time of the writing, OO was a somewhat controversial approach to software development. Meyer’s primary intention was not to make OO understandable, but to prove that OO as an end-to-end software development method was superior to all of the existing alternatives. To this end, many of the explanations and ideas are accompanied by mathematical proofs and notations which, while necessary to the progression of his arguments, only serve to frustrate those seeking to understand OO as quickly and plainly as possible.

Despite the fact that OOSC2 is not, in my opinion, the best book to learn or understand OO (although some would disagree), it is without a doubt one of the most important and influential works in the history of software engineering. As such, I recommend it highly to any person serious about software development. It is a challenging read that will add depth to your view of the craft and force you to grapple with concepts that are often taken for granted in today’s world of pervasive OO such as the superiority of single inheritance, the importance of designing by contract, the value of assertions, type checking and constrained genericity.

I thoroughly enjoyed the journey that is OOSC2 and hope you have the chance to as well!



Section9 is a computer club based out of the Springfield Missouri area. For more information, please see the About Us page or follow us on Facebook.