Archive for the ‘Et cetera’ Category

Let’s get ready to rumble!

There was a time when I would have laughed at the thought of a Microsoft shell going up against a Linux shell. In fact, when I first heard of PowerShell, that’s exactly what I did. Given the horror that is cmd.exe, any other reaction is hard to imagine. Matt Wrock at Hurry Up and Wait states it best: “Friends don’t let friends use cmd.exe and you are my friend.” Because of this, PowerShell is often convicted of the same crimes and dismissed, by association, with varying degrees of prejudice. For example, Mike James, one of my favorite authors, never fails to amuse me with his grumpy PowerShell skepticism. Now that PowerShell is open source and cross-platform, though, I think the tide may be turning. In fact, I was recently surprised by how easy it was to get a working pwsh prompt on a fresh Antergos Linux installation.

# install the .NET SDK and the pacaur AUR helper
pacman -S dotnet-sdk pacaur
# build and install the PowerShell binary package from the AUR
pacaur -S powershell-bin
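
Assuming both packages install cleanly, a quick sanity check from any terminal confirms that the shell works (checking the version table is just one convenient way to do it):

pwsh                          # launch PowerShell Core
$PSVersionTable.PSVersion     # then, at the PS prompt, print the version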

Unfortunately, I could not get PowerShell to work with the integrated terminal in VS Code, but I think problems like these will gradually disappear. There seems to be a lot of momentum in making this a first-class experience.

This leaves Linux users faced with an interesting choice. Like so many other dilemmas in software development, the question is not can we, but should we. For long-time .NET programmers like myself, PowerShell on Linux is an easy sell. For veteran Linux hackers, on the other hand…not so much. There is a major paradigm gap and I honestly don’t know which camp will carry the day. So in the interest of “science”, let’s consider an example for both.

I recently needed to query the file system for a list of unique filenames in a set of search directories. For example, given the following directory structure, I would expect the output to be file1, file2, file3.
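
The original directory diagram is not reproduced here, but a hypothetical layout along these lines (the dirA and dirB names are made up) illustrates the idea:

parentDirectory/
    dirA/
        searchDir/
            file1
            file2
    dirB/
        searchDir/
            file1
            file3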


This can be accomplished as follows.

PowerShell:

Get-ChildItem "parentDirectory/*/searchDir/*" | Select-Object "Name" -Unique

Bash:

ls parentDirectory/*/searchDir/* | xargs -n 1 basename | sort -u

I would argue that the PowerShell version is more intuitive and readable, although many find the verbosity off-putting and prefer the terseness of Bash. Admittedly, there is beauty in the succinctness of many Bash commands, but it is a common misconception that PowerShell does not have similar capabilities. In many cases you can achieve a high degree of terseness simply by taking advantage of PowerShell’s support for aliases and partial parameter names. Here is the same command expressed tersely. Note that it is actually more compact than the Bash version, although I almost always prefer gross verbosity, but hey, that’s just me.

ls parentDirectory/*/searchDir/* | select Name -U

Another major difference is that many tasks in Bash involve piping text between various binary programs. In the above example, four binaries are required to accomplish the task, while PowerShell requires only two native cmdlets. Perhaps the most important difference, however, is how pipelines work. In traditional Linux shells, you usually work with binaries that pass text on the standard input and output channels. In PowerShell, you pass around objects in the full object-oriented sense. For example, if I were interested in CreationTime, Extension, IsReadOnly, Length or any of the other numerous FileInfo properties, it would be a simple matter to integrate them into my query. Because of this, PowerShell is closer to a functional programming language than a traditional shell, where complex queries involving data transforms are a lot harder and, depending on the scenario, usually involve some clever text-parsing gymnastics.
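
To make that concrete, here is a sketch of the kind of query that becomes trivial once objects are flowing through the pipeline. It reuses the hypothetical paths from above; Length, IsReadOnly and CreationTime are standard FileInfo properties, and the 1kb threshold is arbitrary.

Get-ChildItem "parentDirectory/*/searchDir/*" |
    Where-Object { $_.Length -gt 1kb -and -not $_.IsReadOnly } |
    Sort-Object CreationTime |
    Select-Object Name, Length, CreationTime -Unique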

Despite PowerShell’s arguable advantages, the traditional way of doing things in Linux is very mature, robust and has an enormous body of documentation around it to help developers become productive quickly. It will certainly be interesting to see how things unfold. I hope to see 12 exciting rounds. Let’s get ready to rumble! 🙂

Agile Estimation

I recently had a conversation with a friend about Scrum and he mentioned a challenge his team was experiencing. They were a fairly young team with less than a year under their belts, but I have frequently encountered the same challenge and wanted to share my thoughts. Essentially, the problem involved an observed discrepancy between the team’s velocity and capacity. To define these terms, velocity is a relative measurement of the amount of work done in a sprint, while capacity is the total number of hours a team has to do the work. The discrepancy they observed was that they were only able to allocate about 30% of their capacity before hitting a ceiling with their velocity. In other words, the average amount of work they were able to complete in a sprint, when estimated in hours, was only about 30% of their time. To make matters worse, several members of the team, including the scrum master (who was also a developer), felt that the solution was to allocate more hours and just work harder. The scrum master even went so far as to unilaterally reduce estimates in order to bring in more work during a sprint. Apparently, the team’s management supported this approach. My friend (who constantly works overtime anyway) was very discouraged about the direction things were going with Scrum, and I certainly cannot blame him.

In my own experience, excess capacity is not really a problem. It simply means that the team is not able to accurately estimate the hours required to complete a work item. This is usually caused by poor estimation skills (which can only be improved with practice) or the mid-sprint discovery of unplanned work. Teams that are transitioning to Scrum almost always estimate poorly and teams that work with legacy code are often unable to predict the actual work required for any given work item. In situations like these, the best approach, in my opinion, is to establish an average velocity as quickly as possible and use it as a measure of how much work to attempt in a given sprint. Once a team is able to consistently complete their commitments, they can focus on improving their velocity. The metric of capacity is not needed for any of this and to be honest I don’t find it useful for anything.

Unfortunately for my friend, the real challenge his team is facing is organizational. Their scrum master is functioning as a local authority and is empowered to do so by the existing management structure. This is a hierarchical and authoritative mentality common in traditional development organizations and it will not go away willingly. It can take many forms. Sometimes the product owner is the local authority, sometimes it is a senior engineer or even an external manager. Sometimes there are competing authorities. Regardless of the form, the result is a top-down hierarchical arrangement where decisions are made outside of the team, passed down from authority to authority, and the team is expected to comply. On a mature agile team, this organizational structure is inverted. The team is empowered by the management structure to make their own decisions and determine their own direction. The team works together without positions of authority and makes decisions based on consensus.

The entire agile movement is largely premised on the idea that this organizational arrangement is more effective at producing value than the traditional top-down arrangement. Unfortunately, it is rare to see a management structure willing to give up top-down control once they have become accustomed to it. Instead, it is common to see situations similar to my friend’s above, where an organization will superficially embrace the motions of an agile methodology but not actually embrace the underlying values. For organizations that are not already well established in agile, it is a very difficult change. In order for a true transformation to occur, management must genuinely support and value the principles of agile, and that requires intelligence, courage and vision: qualities that are rarely found in combination anywhere.

Below is a synopsis of the conversation referenced above.

[Here is the quote I promised to send you regarding your question on working overtime.]

“The Agile mindset views recourse to overtime, other than on an exceptional basis, as detrimental to productivity rather than enhancing it. Overtime tends to mask schedule, management or quality deficiencies; the Agile approach favors exposing these deficiencies as early as possible and remedying their underlying causes, rather than merely treating the symptoms.”

Sustainable Pace

Thank you. I will use this as ammo in our next retrospective when I bring up overtime hours.

Good luck!

Ultimately it doesn’t really matter if the numbers for your estimates are big or small. Their only purpose is to provide a loose projection of when future work might be complete.

So for example let’s say you have a backlog of 100 work items that you want to complete in a Scrum project. The first step would be to look at each item and assign a numeric value to the amount of work involved. It’s just a rough guess, but you want to be as honest as possible. You may have a team full of speedsters that believe they can complete anything in under 2 hours or you may have a timid team of novices that estimate high. It doesn’t really matter. Let me explain…

Imagine there are two teams—team rabbit and team turtle. Both teams are asked to complete the same 100 work items using Scrum. Team rabbit is full of speedsters and they estimate each work item on the low side. When they add up all of their estimates they get the number 100. This estimate may be 100 hours, or 100 points or 100 unicorns…it doesn’t really matter. Only the number itself matters. Let’s use “points” as the unit just for the sake of argument.

At the end of sprint 1 team rabbit has completed 15 points worth of work items. In sprint 2 they complete 23 points and in sprint 3 they complete 22 points. Team rabbit’s average “velocity” is 20 points per sprint ((15 + 23 + 22) / 3 = 60 / 3 = 20) and management can reasonably conclude that they will complete all the work in approximately 5 sprints. If another 100 points of work are added to the project, management can plan for that work to take another 5 sprints to complete.

Team turtle on the other hand is not confident. They estimate each of the 100 work items and come up with a total of 1000 points. At the end of sprint 1 they complete 100 points, then 200 in sprint 2 and finally 150 in sprint 3. Their average velocity is 150 points and management can conclude that it will take them between 6-7 sprints to complete the work. Note that even though their average “velocity” is higher, their actual speed is lower. Average velocity is just a relative number for measuring a team’s speed. You can’t really compare the numeric velocity of two teams because each team estimates in a unique way.
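
To make the arithmetic concrete, here is the same forecast worked out in PowerShell (purely illustrative, using team turtle’s numbers from above):

$velocities = 100, 200, 150                                   # points completed per sprint
$average = ($velocities | Measure-Object -Average).Average    # 150 points per sprint
$remaining = 1000 - ($velocities | Measure-Object -Sum).Sum   # 550 points still open
[math]::Ceiling($remaining / $average)                        # 4 more sprints, 7 in total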

So as you can see, the actual estimate number doesn’t really matter. Its only value is to determine a team’s average velocity, which is in turn used to forecast when future work *might* be complete. I’ve found forecasting to be reasonably accurate out to about 3 months, assuming the team’s average velocity is stable and they estimate in a consistent way. In my opinion, forecasting isn’t very useful.

Average velocity on the other hand *is* useful. Once you establish an average velocity, you have a pretty good idea of how much work your team can complete in a given sprint. If you commit to more work than what you’ve been able to complete (on average) in the past, you are probably not going to meet your commitment without working overtime and/or taking shortcuts and in the long run you’ll do more harm than good that way.

The best way to use estimation is to be as honest and consistent as possible in order to establish an average velocity. Then once that is established, use your retrospective meetings to come up with ideas to increase average velocity by eliminating bottlenecks and inefficiencies. Fudging the numbers in order to force the team to commit to more work in a given sprint will probably not change the actual amount of work that is completed, but it will certainly lower the quality of work, create resentment within the team, and even further reduce your ability to forecast accurately.

One last thing…sometimes a team will break sprint backlog items into tasks. For example, you might have a backlog item to add a new “report” feature. When you decide that you will work on this backlog item in the next sprint, it is common to create tasks for all of the individual steps involved in completing the backlog item. You might have the following tasks.

  1. Create data structures to store the report data
  2. Create the report layout
  3. Create the report export process
  4. Write unit tests
  5. Deploy to QA environment
  6. Perform unit tests

Some teams will estimate these individual tasks in hours and use this value to determine how much work to commit to in a given sprint. For example, if your sprint lasts 2 weeks and you have 4 developers, then you have a capacity of 2 * 40 * 4 = 320 hours to spend on tasks in a given sprint. In my experience, this is a very poor way to estimate how much work you should commit to in a sprint. Average velocity is much more realistic. In the 3 years that I did Scrum, we always estimated our tasks in hours but used average velocity to gauge the amount of work to commit to. Most of the time the estimated task hours were 20-40% of our capacity, and I worked overtime almost constantly.

Overall, I think agile is mostly about getting into an iterative mindset and working as a team. Check out the video linked below. It’s the best explanation of Scrum that I’ve seen. It might be worth sharing with your team.

The one true variable in our sprint is the average velocity. For the last 7 months, our average velocity has been 280, but our hours worked always make it look like we only worked two-thirds of the month. So, management keeps saying we need to increase our velocity since our hours worked show we have extra time. I don’t know if I can get the under-estimators to be realistic since they get praise from management for estimating low numbers.

Agile is a great way for people to get promoted, because they can under-estimate, let other people do the work, and then get praise from management because they push for fewer hours.

It really takes a good team to make Agile work.

Agreed, difficult team members can really make life unpleasant. One thing that I did to increase the number of task hours was to add tasks for every little thing…testing, deployment, planning, refactoring. I took a few days and wrote down everything I did to get an idea of what I was spending time on, and then I started including it in my task estimates, even if it was just 15 minutes. I found that I was underestimating the time it took to test and debug code. For example, if something took 4 hours to code, I found that it took about 4 hours of testing to get all the edge cases worked out, but I was only estimating 30 minutes.

Despite all that, we rarely committed to more than 50% of our team capacity. Fortunately, no one cared. If I ever work on a Scrum team again, I plan to suggest that we not even bother estimating task hours. It’s a waste of time.

Also, I should mention that capacity is typically calculated at 6 hours per team member per day (not 8 hours like I implied in my last email). So if your sprint is 1 week and you have 10 team members, the capacity is 5 days * 6 hours * 10 team members = 300 hours. When committing to work, most recommend that you not go above 75% of your capacity. But like I said, I think this is a waste of time.

It’s all just a bunch of nonsense anyway. The real trick is having fun despite all the difficult people. 🙂

Burger King explains Net Neutrality

Although I strongly support the ideas of the Libertarian party with respect to minimizing the involvement of government in business and personal matters, I also believe that it is the responsibility of government to protect the rights and freedoms of its people. As a teenager, I remember logging into the Ozarks Regional Information Online Network (ORION) with a dial-up modem and telnet client to learn my first programming language and explore a new world of ideas and information. Since then, the internet has grown in unbelievable ways and the world is better because of it. I believe that Net Neutrality is essential to preserving the freedom that the internet provides to any person of any race, age, color or sex. By removing Net Neutrality, ISPs are able to legally throttle, censor and monetize the information you access according to what is most profitable. I believe in a world where one is free to learn and explore the wealth of information and resources available on the internet without the interference of profit-driven gatekeepers.

For more information, please feel free to check out and participate if you can. In the meantime, enjoy Burger King’s video explanation of the topic and remember the stance your elected officials took next time you vote.

The Bitcoin Experiment

I recently became interested in Bitcoin and decided to become one of the daring entrepreneurs in the field by establishing myself as a credible merchant. After watching several documentaries on Amazon, taking some PluralSight training and reading up on various sites, I bravely generated a Bitcoin address and printed it out. I was ready for a garage sale.

Unfortunately, the local garage sale community was not so intrepid and declined to spend even a single Satoshi on my junk. In fact, they did not even bother to spend a single US cent either. I did, however, manage to spend approximately $3.00 USD worth of BTC performing a test transfer of $1.00 USD worth of BTC to my own Bitcoin address. Excellent! If only I had some venture capital to fund my mostly certain meteoric growth. Oh wait, look! The sidebar now contains a QR encoded Bitcoin address and so does the featured image for this post! How convenient! Feel free to support us with your Bitcoin donations!

All joking aside, Bitcoin and its underlying technologies are very interesting and promise to transform the current financial landscape. Stay tuned for a follow-up post where I will explain the basic concepts of Bitcoin, the technologies involved and how they work at a high level, and why Bitcoin is something worth understanding.

Angular 2 and Plunker

I put together this plunk for a presentation I will be giving about Angular 2 at work. After working with server-side frameworks for so long, the flexibility and power of a client-side framework is truly amazing. It’s hard to believe that it is possible to code up a fully functional Angular application inside a webpage and then embed that application into another website. The web world has truly changed a lot in the last few years and significantly for the better! I only hope the enterprise world can keep up.

The Pajama Coder

My apprentice hard at work in her pajamas!  =)

Useful OpenSSL Commands

As a software developer, especially when working with security or web-based technologies, one is often required to deal with X.509 certificates. Although it is increasingly easy to obtain a certificate issued by a trusted certificate authority, understanding how to create and use them yourself is invaluable. Below is a list of the OpenSSL commands that have been the most useful to me.

Create a new RSA key:

openssl genrsa -out key_filename.key 2048

The genrsa command generates a random RSA private key consisting of two prime numbers. The size of the key in bits is determined by the last argument (2048 above). Larger values are more secure but use more resources. NIST estimates that 2048-bit keys will remain secure until around 2030, while 1024-bit keys are no longer considered secure and should not be used.
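
To sanity-check the generated key, its components can be printed in readable form (purely optional; the later commands do not depend on this step):

openssl rsa -in key_filename.key -noout -text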

Create self-signed X.509 certificate:

openssl req \
    -x509 \
    -sha256 \
    -days 9999 \
    -newkey rsa:2048 \
    -keyout key_filename.key \
    -nodes \
    -out ca_filename.cer

The req command with option -x509 generates a self-signed X.509 certificate which can be used for testing or as the root certificate of a certificate authority. Certificates generated in this manner are very useful for development but have limited use in production environments. A common use of self-signed certificates is to enable HTTPS on a local or non-production web server.

When used in this manner, the req command will prompt for information to be included in the resulting certificate. When prompted for Common Name, provide the fully qualified domain name to be secured. It is also possible to create a wildcard certificate, which can be used to secure unlimited sub-domains, by using the * character in place of a concrete sub-domain. Note that to secure multiple distinct domains with the same certificate, the Subject Alternative Name (SAN) extension must be used instead. Using the SAN extension is discussed in the example for the x509 command below.

The -days option determines the number of days (relative to the current date) that the certificate will be valid. The -newkey option generates a new CSR and private key using the specified algorithm. If the -keyout option is used, the generated key will be written to the specified file. The -nodes option (no DES) indicates that the generated key should be stored without encryption and can be omitted if you wish to protect the key with a pass phrase.

The -sha256 option indicates that the SHA256 hashing algorithm should be used to generate the message digest when digitally signing the certificate. Note that SHA256 is currently the default and this option can usually be omitted. It is important to use a secure digest algorithm because certificates signed with an insecure message digest such as SHA1 generate warnings in some browsers resulting in sites that clients will not consider trustworthy.
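
Once the certificate has been created, its subject, validity dates, signature algorithm and extensions can be reviewed with the x509 command:

openssl x509 -in ca_filename.cer -noout -text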

Create certificate signing request:

openssl req \
    -new \
    -sha256 \
    -key key_filename.key \
    -out request_filename.csr

The req command with option -new generates a new certificate signing request containing the information needed by a certificate authority to create a new X.509 certificate. This command prompts for the same certificate information as the example above.

The -key option specifies the key file to use. It is also common to generate a new key as part of this command by using for example -newkey rsa:2048 -keyout key_filename.key -nodes instead of -key key_filename.key. This technique eliminates the need for the use of genrsa as shown in the first example above.
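
Before handing the request to a certificate authority, its contents and signature can be checked as follows:

openssl req -in request_filename.csr -noout -text -verify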

Sign CSR to create new X.509 certificate:

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer

The x509 command with option -req is used to create an X.509 certificate from a certificate signing request. The -CA option specifies the certificate of the signing Certificate Authority and the -CAkey option specifies the private key to use for the digital signature of the resulting certificate. The -CAcreateserial option indicates that a new serial number file should be created if needed and that the serial number of the resulting certificate should be read from this file.
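
A quick way to confirm that the newly issued certificate actually chains to the signing CA:

openssl verify -CAfile ca_filename.cer certificate_filename.cer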

To create a certificate with the Subject Alternative Name extension, add the options -extensions v3_req -extfile san.cnf as shown below.

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer \
    -extensions v3_req \
    -extfile san.cnf

The file specified by -extfile is used to define the domain names to be secured by the certificate. Since the command above references -extensions v3_req, the file needs a v3_req section that points to a list of alternative names, along the following lines.

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 =
DNS.2 = another.domain
DNS.x =

For more information about creating certificates with the SAN extension see

Combine X.509 certificate and key into PKCS12 encoding:

openssl pkcs12 \
    -export \
    -out pkcs12_filename.pfx \
    -inkey key_filename.key \
    -in certificate_filename.cer

The pkcs12 command with the -export option can be used to combine a key and certificate into a single file in PKCS12 format secured by a pass phrase. This is useful when working with Microsoft systems where PKCS12 is commonly used. Note that the file extension used for PKCS12 is typically .pfx on Microsoft systems and .p12 on Linux systems.
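
The operation can also be reversed, for example to recover the certificate and unencrypted key from a .pfx file (the output filename here is arbitrary):

openssl pkcs12 -in pkcs12_filename.pfx -nodes -out extracted.pem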

References in .NET

I put together the following information while troubleshooting a nasty runtime bug that manifested only after deployment to a production environment. Since then I have returned countless times to clarify my understanding. I am posting it here in the hope that it will help others as much as it has helped me.


In .NET, code is typically compiled into files called assemblies. It is possible for code in one assembly to invoke code in another assembly if a reference is declared. In this way, code from a large variety of sources can be combined and reused. In order for this process to work, each reference must be resolved. Reference resolution is the process of locating the concrete file corresponding to the referenced assembly. It is important to understand that reference resolution occurs at both compile time and at runtime and the process for each is totally different. Failing to understand this point can lead to endless headache. Believe me, I know.

Runtime Reference Resolution (aka binding)

When an application is invoked, it must be loaded into memory. If an application uses objects in another assembly, that assembly must also be loaded into memory. The .NET framework uses the following process to do this.

  • Determine the version of the referenced assembly.
    • The version of the referenced assembly is written to the application’s manifest at compile time. This version will be used unless overridden via configuration (a sketch of such an override appears after this list).
      • application/web.config
      • Publisher policy (overrides application/web.config)
      • machine.config (overrides publisher policy and application/web.config)
  • If assembly was previously loaded, then re-use from cache.
  • If strong-name provided, search GAC.
  • Probe
    • If codebase element specified, then use.
      • Binding failure if not found.
      • Binding failure if version, culture, or public key mismatch.
    • Search application base path. Matches by simple name and fails if first match is wrong version.
      • If no culture provided, search root then root/[assembly name]
      • If culture provided, search root/[culture] then root/[culture]/[assembly name].
      • If web/app.config specifies probing element, search paths in privatePath. Paths must be relative to application root.

For more information see

Compile Time Reference Resolution

Compile time resolution occurs in MSBuild during the build process. MSBuild is the build engine used by both Visual Studio and TFS. Note that for ASP.NET applications, there is an extra compile step that occurs for dynamic components (aspx, ascx, asax, cshtml, etc.) when they are first accessed. Reference resolution for these two scenarios is described below.


MSBuild

Assembly resolution occurs in the ResolveAssemblyReferences MSBuild target. This target invokes the ResolveAssemblyReference task, passing the value of the AssemblySearchPaths property to the SearchPaths parameter, which is assigned a value as follows.

<!--
    The SearchPaths property is set to find assemblies in the following order:
    (1) Files from current project - indicated by {CandidateAssemblyFiles}
    (2) $(ReferencePath) - the reference path property, which comes from the .USER file.
    (3) The hintpath from the referenced item itself, indicated by {HintPathFromItem}.
    (4) The directory of MSBuild's "target" runtime from GetFrameworkPath. The "target" runtime folder is the folder of the runtime that MSBuild is a part of.
    (5) Registered assembly folders, indicated by {Registry:*,*,*}
    (6) Legacy registered assembly folders, indicated by {AssemblyFolders}
    (7) Resolve to the GAC.
    (8) Treat the reference's Include as if it were a real file name.
    (9) Look in the application's output folder (like bin\debug)
-->

There is a lot going on here and I don’t claim to understand all of it, but I will try to point out the important parts.

  • The most common locations to find a reference are (in search order)
    • Files added manually to project (ex. <project path>/lib/coollib.dll)
    • Location specified by hint path.
    • GAC
    • Application output path.
  • References flagged with Copy Local = true are copied to the application output path *after* compilation. This means that the value of this setting has no impact in the reference resolution process for MSBuild. Note that the Copy Local UI setting maps to the <private> element in the project file.
  • MSBuild will always try to use the latest version available for a given assembly, unless specific version = true is specified. The default value for this setting is false which means that when searching the GAC, the latest version of a DLL will always be used regardless of the version specified in the project definition.

ASP.NET Runtime Compiler

Unless previously compiled into the project output folder using the pre-compile option at build time, all dynamic content (aspx, ascx, asax, cshtml, etc.) is compiled once at runtime when the application is first accessed. This dynamic content can also have dependencies on other assemblies. The system.web > compilation > assemblies element is used to tell the ASP.NET runtime compiler about these dependencies so that it can reference them.
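
For reference, a minimal (hypothetical) sketch of what this element looks like in web.config:

<system.web>
  <compilation>
    <assemblies>
      <!-- hypothetical GAC-installed dependency needed only by dynamic content -->
      <add assembly="SomeGacLibrary, Version=, Culture=neutral, PublicKeyToken=32ab4ba45e0a69a1" />
    </assemblies>
  </compilation>
</system.web>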

The ASP.NET runtime compiler will search the following locations in order for these references.

  • The application’s private assembly cache (aka PAC), which is the <app path>/bin folder.
  • GAC (if reference is specified using strong name).

Note that by default, the root web.config references a few system assemblies and all assemblies in the PAC using the <add assembly="*" /> wildcard syntax. This means that you will rarely need to add references manually to the system.web > compilation > assemblies element. In many cases you can and should delete the element entirely. It should only contain references to assemblies stored in the GAC. Using Copy Local = true is the recommended approach to include non-GAC references required by the ASP.NET runtime compiler.

Also note that many subtle errors can occur if you use the system.web > compilation > assemblies element to specify a specific version number using the assembly’s strong name. The ASP.NET runtime compiler will attempt to compile using the exact version you specify. This can cause problems if the non-dynamic components of the application were compiled against a different version of the assembly during the MSBuild compilation phase. This is often the case because MSBuild will use the latest version it can find and only the exact version if you set specific version = true.


Unicode Emoji(!?!)

I had no idea there were Unicode emoji codes that are standardized across platforms! 🙃

There are some fun extensions for Chrome, too. I searched for “unicode emoji” and found a couple that work well and a couple that don’t. YMMV, so I’m reluctant to recommend one of those. Just try them out. 😎

Support No Support!

Section9 is happy to announce that we have migrated to a new hosting provider called No Support Linux. For just $1/month, they provide a single website running on Apache with 1 GB of storage, 30 GB of data transfer, cPanel, SSH to a jailed root, shared hosting on fairly fast servers and no support at all!

Although we will miss Arvixe and IIS, the performance and price of No Support Linux is hard to beat.

