Author Archive

Book Review – JavaScript: The Good Parts

JavaScript: The Good Parts by Douglas Crockford

Last summer I decided to take a more serious look at JavaScript in preparation for some work involving AngularJS. At the time, I regarded JavaScript as a poorly designed language used only by developers with no better option, but the undeniable rise of Node, NPM, Angular and so many other successful JavaScript frameworks had forced me to second-guess my assumptions. Reluctantly, I decided to see what the JavaScript hype was all about.

Many years ago, as an intern programmer/analyst, I had read and enjoyed JavaScript: The Definitive Guide and initially planned to read it again to get back up to speed. I was not overly enthusiastic about its 1,096 pages and so was pleasantly surprised to discover that the most highly recommended book on the subject was Douglas Crockford’s JavaScript: The Good Parts.

At only 176 pages, it would be easy to conclude that JavaScript: The Good Parts could not possibly do justice to a topic as mature and widespread as JavaScript; yet it does. Crockford explains it best in the very first section of the book.

“In JavaScript, there is a beautiful, elegant, highly expressive language that is buried under a steaming pile of good intentions and blunders. The best nature of JavaScript is so effectively hidden that for many years the prevailing opinion of JavaScript was that it was an unsightly, incompetent toy. My intention here is to expose the goodness in JavaScript, an outstanding, dynamic programming language. JavaScript is a block of marble, and I chip away the features that are not beautiful until the language’s true nature reveals itself. I believe that the elegant subset I carved out is vastly superior to the language as a whole, being more reliable, readable, and maintainable.”

This resonated with my own experience of the language. By eliminating the complexity of the “bad parts”, Crockford is able to present JavaScript in a way that allows the reader to quickly understand how to use the language effectively. No time is spent on the awful parts of JavaScript beyond why and how to avoid them. Moreover, no time is spent discussing specific libraries or frameworks. Even the DOM is not addressed any more than is absolutely necessary. This may leave some readers with unanswered questions, but Crockford is laser focused on the language itself and the book is better for it.

Although the book is truly a masterpiece, my one humble criticism is that some of the explanations are arguably too terse and some of the code examples are more advanced than they need to be to illustrate the topic at hand. Do not expect multiple explanations for a single concept or any repetition at all. Expect a terse, no-frills, right-to-the-point explanation with code samples heavily laced with functional-style programming. If that suits you (and it should), then you will enjoy this book.

In the terse spirit of the book, below are outlines of the good, awful, and bad parts, according to Crockford. Notice the proportions.

Good Parts

  • Functions as first class objects
  • Dynamic objects with prototypal inheritance
  • Object literals and array literals

Awful Parts

  • Global variables
  • Scope
  • Semicolon insertion
  • Reserved words
  • Unicode
  • typeof
  • parseInt
  • Floating Point
  • NaN
  • Phony Arrays
  • Falsy Values
  • hasOwnProperty
  • Object

Bad Parts

  • ==
  • with Statement
  • eval
  • continue Statement
  • switch Fall Through
  • Block-less Statements
  • Bitwise Operators
  • The function Statement Versus the function Expression
  • Typed Wrappers
  • new
  • void
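
Since the book is best experienced through its code, here is a small sketch (mine, not Crockford’s) of three of the good parts in action: object literals, prototypal inheritance, and functions as first-class objects.

// An object literal: no class ceremony required.
var cat = {
    name: 'Whiskers',
    speak: function () {
        return this.name + ' says meow';
    }
};

// Prototypal inheritance: kitten delegates to cat for anything it lacks.
var kitten = Object.create(cat);
kitten.name = 'Tiger';
console.log(kitten.speak()); // Tiger says meow

// Functions as first-class objects: passed around like any other value.
var twice = function (f, x) {
    return f(f(x));
};
console.log(twice(function (n) { return n + 1; }, 5)); // 7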

If you work with JavaScript in any capacity, I highly recommend reading this book!

Angular 2 and Plunker

I put together this plunk for a presentation I will be giving about Angular 2 at work. After working with server-side frameworks for so long, the flexibility and power of a client-side framework is truly amazing. It’s hard to believe that it is possible to code up a fully functional Angular application inside a webpage and then embed that application into another website. The web world has truly changed a lot in the last few years and significantly for the better! I only hope the enterprise world can keep up.

.NET Core Bootstrap Script for Linux

A quick script to bootstrap a dotnet core development environment for Linux Mint 18 / Ubuntu 16.04. Installs the following components.

  • .NET Core 1.1
  • Visual Studio Code with C#
  • Node.js and NPM
  • Yeoman

Usage

curl -s https://gist.githubusercontent.com/tschoonover/274627440d11ff32c2cd91a9d16a4974/raw/b86a6dfb39a06e49882ce8f6c698993cf8b993fd/bootstrap-dotnet | sudo bash -s

The Pajama Coder

My apprentice hard at work in her pajamas! =)

Book Review – Object Oriented Software Construction Second Edition

Object Oriented Software Construction Second Edition by Bertrand Meyer

I discovered this book in 2007 while searching for references on the subject of object oriented programming. Although I knew the basics at the time and had been coding in OO languages for several years, I felt that I was doing it poorly and wanted to take my understanding to the next level. It did not take much time to realize that OOSC2 was generally regarded as one of the best, if not the BEST, book on the topic and so I happily spent an outrageous $78 for a new edition. That was exactly 9 years ago today and the book now sells for $120 brand new.

When it arrived I promptly read the first page, browsed through the chapters and set it aside with the sincere intention of reading it cover to cover “when I had more time.” Months passed, then years. I read many other books and continued to program in OO, but I could not seem to muster the motivation to tackle those 1200+ pages. One day I took a new job and brought this book to the office. One of the senior architects walked by and commented, “that’s one of the best books I’ve ever read.” I knew then that it was time. I cleared my schedule and over the course of many months, inched my way through it cover to cover.

Looking back, I would not recommend this book to anyone wishing to learn or improve their understanding of object oriented programming. Instead, I would recommend Head First Object-Oriented Analysis and Design. Although OOSC2 does explain all of the essential OO concepts in great detail, it reads like an academic thesis full of proofs and theorems. This is because at the time of the writing, OO was a somewhat controversial approach to software development. Meyer’s primary intention was not to make OO understandable, but to prove that OO as an end-to-end software development method was superior to all of the existing alternatives. To this end, many of the explanations and ideas are accompanied by mathematical proofs and notations which, while necessary to the progression of his arguments, only serve to frustrate those seeking to understand OO as quickly and plainly as possible.

Despite the fact that OOSC2 is not, in my opinion, the best book to learn or understand OO (although some would disagree), it is without a doubt one of the most important and influential works in the history of software engineering. As such, I recommend it highly to any person serious about software development. It is a challenging read that will add depth to your view of the craft and force you to grapple with concepts that today’s world of pervasive OO often takes for granted, such as single versus multiple inheritance, the importance of design by contract, the value of assertions, type checking and constrained genericity.

I thoroughly enjoyed the journey that is OOSC2 and hope you have the chance to as well!

Useful OpenSSL Commands

As a software developer, especially when working with security or web-based technologies, one is often required to deal with X.509 certificates. Although it is increasingly easy to obtain a certificate issued by a trusted certificate authority, understanding how to create and use them yourself is invaluable. Below is a list of the OpenSSL commands that have been the most useful to me.

Create a new RSA key:

openssl genrsa -out key_filename.key 2048

The genrsa command generates a random RSA private key consisting of two prime numbers. The size of the key in bits is determined by the last argument (2048 above). Larger values are more secure, but use more resources. NIST estimates that 2048-bit keys will be secure until 2030, while a 1024-bit key can reportedly be cracked in approximately 100 hours.
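
To sanity-check the result, the generated key can be inspected with the rsa command (the filename matches the example above).

openssl rsa -in key_filename.key -check -noout
openssl rsa -in key_filename.key -text -noout

The first command verifies the key’s internal consistency; the second prints its components (modulus, exponents, primes).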

Create self-signed X.509 certificate:

openssl req \
    -x509 \
    -sha256 \
    -days 9999 \
    -newkey rsa:2048 \
    -keyout key_filename.key \
    -nodes \
    -out ca_filename.cer

The req command with option -x509 generates a self-signed X.509 certificate which can be used for testing or as the root certificate of a certificate authority. Certificates generated in this manner are very useful for development but have limited use in production environments. A common use of self-signed certificates is to enable HTTPS on a local or non-production web server.

When used in this manner, the req command will prompt for information to be included in the resulting certificate. When prompted for Common Name, provide the fully qualified domain name to be secured (ex. www.google.com). It is also possible to create a wildcard certificate which can be used to secure unlimited sub-domains by using the * character in place of a concrete sub-domain (ex. *.google.com). Note that to secure multiple domains with the same certificate (ex. www.google.com, www.google.org, www.google.net) the Subject Alternative Name (SAN) extension must be used instead. Using the SAN extension is discussed in the example for the x509 command below.

The -days option determines the number of days (relative to the current date) that the certificate will be valid. The -newkey option generates a new CSR and private key using the specified algorithm. If the -keyout option is used, the generated key will be output to the specified file. Option -nodes (no DES) indicates that the generated key should be stored without encryption and can be omitted if you wish to protect the key with a pass phrase.

The -sha256 option indicates that the SHA256 hashing algorithm should be used to generate the message digest when digitally signing the certificate. Note that SHA256 is currently the default and this option can usually be omitted. It is important to use a secure digest algorithm because certificates signed with an insecure message digest such as SHA1 generate warnings in some browsers, resulting in sites that clients will not consider trustworthy.
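
To review what was generated, the x509 command can print the certificate in human-readable form, including the subject, issuer and validity dates.

openssl x509 -in ca_filename.cer -text -noout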

Create certificate signing request:

openssl req \
    -new \
    -sha256 \
    -key key_filename.key \
    -out request_filename.csr

The req command with option -new generates a new certificate signing request containing the information needed by a certificate authority to create a new X.509 certificate. This command prompts for the same certificate information as the example above.

The -key option specifies the key file to use. It is also common to generate a new key as part of this command by using for example -newkey rsa:2048 -keyout key_filename.key -nodes instead of -key key_filename.key. This technique eliminates the need to run genrsa separately as shown in the first example above.
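
Before submitting a CSR to a certificate authority, it can be checked with the -verify option, which validates the request’s signature and prints its contents.

openssl req -in request_filename.csr -noout -text -verify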

Sign CSR to create new X.509 certificate:

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer

The x509 command with option -req is used to create an X.509 certificate from a certificate signing request. The -CA option specifies the certificate of the signing Certificate Authority and the -CAkey option specifies the private key to use for the digital signature of the resulting certificate. The -CAcreateserial option indicates that a new serial number file should be created if needed and that the serial number of the resulting certificate should be read from this file.
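
After signing, the verify command can confirm that the new certificate chains correctly to the CA certificate.

openssl verify -CAfile ca_filename.cer certificate_filename.cer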

To create a certificate with the Subject Alternative Name (SAN) extension, add the options -extensions v3_req -extfile san.cnf as shown below.

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer \
    -extensions v3_req \
    -extfile san.cnf

The file specified by -extfile is used to define the domain names to be secured by the certificate and should be in the following format.

[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = your.domain.name
DNS.2 = another.domain
...
DNS.x = last.domain.name

For more information about creating certificates with the SAN extension see http://techbrahmana.blogspot.com/2013/10/creating-wildcard-self-signed.html.

Combine X.509 certificate and key into PKCS12 encoding:

openssl pkcs12 \
    -export \
    -out pkcs12_filename.pfx \
    -inkey key_filename.key \
    -in certificate_filename.cer

The pkcs12 command with the -export option can be used to combine a key and certificate into a single file in PKCS12 format secured by a pass phrase. This is useful when working with Microsoft systems where PKCS12 is commonly used. Note that the file extension used for PKCS12 is typically .pfx on Microsoft systems and .p12 on Linux systems.
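
The reverse operation is also possible: a PKCS12 file can be inspected or split back into its certificate and key (the -nodes option leaves the extracted key unencrypted).

openssl pkcs12 -in pkcs12_filename.pfx -info -noout
openssl pkcs12 -in pkcs12_filename.pfx -nokeys -out certificate_filename.cer
openssl pkcs12 -in pkcs12_filename.pfx -nocerts -nodes -out key_filename.key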

Book Review – The Art of Unit Testing: with examples in C#

The Art of Unit Testing: with examples in C# 2nd Edition by Roy Osherove

When I was a young programmer, I remember a job interview where I expressed to a potential employer my belief that coding was a form of art. The job was for an entry level position working with the RPG language on IBM systems. I knew very little about RPG at the time, but I was confident I could learn. I did not get the job and looking back, it’s clear to me that immaturity and lack of experience were the main reasons. Even still, the exchange about art stands out in my mind and I often wonder how it affected the outcome.

Some years later, I discussed this interview with a respected associate. He explained to me that managers, and especially those above them, usually see software development as a complex machine with many moving parts. In order for this machine to function efficiently, each cog and gear must be predictable, measurable and reliable. There is sometimes very little appreciation for art in this point of view and in many ways I agree. Much of software development is about the pursuit of correctness and certainty, the elimination of risk and the maximization of value. Despite this, I still believe that art must play an essential role in any venture for it to be truly worthwhile when all the amounts and values are finally summed. The Art of Unit Testing delivers the best of both worlds in a way that is direct, practical and immediately applicable.

I was first introduced to Roy Osherove’s work indirectly through the MVVM design pattern. I was researching it for a WPF project when I realized that it would be a great chance to learn about unit testing. I googled the topic and discovered artofunittesting.com. After watching all of the videos, which were excellent, I gave the usual sales pitch to management and then opened up the beautiful can of worms that is legacy code. In the end, the decision to include unit tests in the project impacted the schedule significantly for the worse. I had to rewrite perhaps a third of the codebase and abandoned the idea of unit tests for the rest. The code was still buggy and when I joined another team a few months later, the existing unit tests were forgotten. Based on this, it would be easy to conclude that unit testing failed to produce value.

A year or so later, I decided to try TDD on a small project and picked up The Art of Unit Testing to give myself a jump-start. Although not specifically about TDD, this book is regarded as one of the best on the subject of unit testing in general, and having read it twice now, I have no reason to disagree. Unfortunately, I was not prepared for the system shock that exposure to the TDD worldview can produce. On one hand, the TDD code that I wrote was perhaps some of the best code I’ve ever written. On the other hand, the project fell so far behind schedule while I adjusted that it was put on hold indefinitely. Based on this, it would be easy to conclude that TDD specifically and unit testing in general failed to produce value.

Just last year, I was assigned a high risk, high profile project with a critical dependency on another sizable legacy component. The main project required several complex changes to the legacy component and these assignments were given to developers who did not practice unit testing. The result was an extremely buggy and unreliable system and we wasted at least a month tracking down issues and troubleshooting them. Eventually, I realized that it would be better to rewrite it from the ground up and so I did, with a testable design and about 90% code coverage. I was able to do this in a reasonable time frame and the component has been rock solid ever since. No issues at all were found in QA and none have been found in production. Based on this it would be easy to conclude that unit testing successfully produced value and I would agree. It allowed us to complete the high risk, high profile project with a high degree of certainty about the correctness of the final behavior without negatively impacting the schedule. It was in fact the lack of unit tests and unknown code quality that posed the greatest risk to the project schedule.

It is important to realize that the last success would not have been possible without the first two failures. What you will gain from The Art of Unit Testing and pursuit of the discipline in general is not a magical power-up for your next project, but the skills you need to become a stronger developer in the long term. You’ll learn the basics of writing unit tests, you’ll survey the commonly used tools and you’ll be exposed to the concepts required to write readable, high quality, maintainable tests from the perspective of a veteran in the field. The transition to becoming proficient in this discipline won’t take place overnight, but The Art of Unit Testing will help make this worthwhile journey faster and less painful.

References in .NET

I put together the following information while troubleshooting a nasty runtime bug that manifested only after deployment to a production environment. Since then I have returned countless times to clarify my understanding. I am posting it here in the hope that it will help others as much as it has helped me.

Introduction

In .NET, code is typically compiled into files called assemblies. It is possible for code in one assembly to invoke code in another assembly if a reference is declared. In this way, code from a large variety of sources can be combined and reused. In order for this process to work, each reference must be resolved. Reference resolution is the process of locating the concrete file corresponding to the referenced assembly. It is important to understand that reference resolution occurs at both compile time and at runtime and the process for each is totally different. Failing to understand this point can lead to endless headache. Believe me, I know.

Runtime Reference Resolution (aka binding)

When an application is invoked, it must be loaded into memory. If an application uses objects in another assembly, that assembly must also be loaded into memory. The .NET framework uses the following process to do this.

  • Determine version of referenced assembly.
    • The version of the referenced assembly is written to the application’s manifest at compile time. This version will be used unless overridden via configuration.
      • application/web.config
      • Publish Policy (overrides application/web.config)
      • machine.config (overrides Publish Policy and application/web.config)
  • If assembly was previously loaded, then re-use from cache.
  • If strong-name provided, search GAC.
  • Probe
    • If codebase element specified, then use.
      • Binding failure if not found.
      • Binding failure if version, culture, or public key mismatch.
    • Search application base path. Matches by simple name and fails if first match is wrong version.
      • If no culture provided, search root then root/[assembly name]
      • If culture provided, search root/[culture] then root/[culture]/[assembly name].
      • If web/app.config specifies probing element, search paths in privatePath. Paths must be relative to application root.

For more information see http://msdn.microsoft.com/en-us/library/yx7xezcf%28v=vs.110%29.aspx.
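
As a concrete illustration of the application/web.config override mentioned above, a binding redirect looks roughly like this (CoolLib, its public key token and the version numbers are hypothetical placeholders):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- CoolLib is a hypothetical assembly; the token and versions are placeholders. -->
        <assemblyIdentity name="CoolLib" publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- Any request for an old version binds to 2.0.0.0 instead. -->
        <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>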

Compile Time Reference Resolution

Compile time resolution occurs in MSBuild during the build process. MSBuild is the build engine used by both Visual Studio and TFS. Note that for ASP.NET applications, there is an extra compile step that occurs for dynamic components (aspx, ascx, asax, cshtml, etc.) when they are first accessed. Reference resolution for these two scenarios is described below.

MSBuild

Assembly resolution occurs in the ResolveAssemblyReferences MSBuild target. This target invokes the ResolveAssemblyReference task, passing the value of the AssemblySearchPaths property to the SearchPaths parameter, which is assigned a value as follows.

<!--
    The SearchPaths property is set to find assemblies in the following order:
    (1) Files from current project - indicated by {CandidateAssemblyFiles}
    (2) $(ReferencePath) - the reference path property, which comes from the .USER file.
    (3) The hintpath from the referenced item itself, indicated by {HintPathFromItem}.
    (4) The directory of MSBuild's "target" runtime from GetFrameworkPath.
        The "target" runtime folder is the folder of the runtime that MSBuild is a part of.
    (5) Registered assembly folders, indicated by {Registry:*,*,*}
    (6) Legacy registered assembly folders, indicated by {AssemblyFolders}
    (7) Resolve to the GAC.
    (8) Treat the reference's Include as if it were a real file name.
    (9) Look in the application's output folder (like bin\debug)
-->
{CandidateAssemblyFiles};
$(ReferencePath);
{HintPathFromItem};
{TargetFrameworkDirectory};
{Registry:$(FrameworkRegistryBase),$(TargetFrameworkVersion),$(AssemblyFoldersSuffix)$(AssemblyFoldersExConditions)};
{AssemblyFolders};
{GAC};
{RawFileName};
$(OutDir)

There is a lot going on here and I don’t claim to understand all of it, but I will try to point out the important parts.

  • The most common locations to find a reference are (in search order)
    • Files added manually to project (ex. <project path>/lib/coollib.dll)
    • Location specified by hint path.
    • GAC
    • Application output path.
  • References flagged with Copy Local = true are copied to the application output path *after* compilation. This means that the value of this setting has no impact on the reference resolution process for MSBuild. Note that the Copy Local UI setting maps to the <Private> element in the project file (see the sketch after this list).
  • MSBuild will always try to use the latest version available for a given assembly unless Specific Version = true is specified. The default value for this setting is false, which means that when searching the GAC, the latest version of a DLL will always be used regardless of the version specified in the project definition.
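
Here is a rough sketch of how these settings appear in a project file (CoolLib and its hint path are hypothetical):

<ItemGroup>
  <Reference Include="CoolLib">
    <!-- The {HintPathFromItem} location searched by ResolveAssemblyReference. -->
    <HintPath>lib\coollib.dll</HintPath>
    <!-- Maps to Copy Local in the Visual Studio UI. -->
    <Private>True</Private>
  </Reference>
</ItemGroup>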

ASP.NET Runtime Compiler

Unless previously compiled into the project output folder using the pre-compile option at build time, all dynamic content (aspx, ascx, asax, cshtml, etc.) is compiled once at runtime when the application is first accessed. This dynamic content can also have dependencies on other assemblies. The system.web > compilation > assemblies element is used to tell the ASP.NET runtime compiler about these dependencies so that it can reference them.

The ASP.NET runtime compiler will search the following locations in order for these references.

  • The application’s private assembly cache (aka PAC), which is the <app path>/bin folder.
  • GAC (if reference is specified using strong name).

Note that by default, the root web.config references a few system assemblies and all assemblies in the PAC using the <add assembly="*" /> wildcard syntax. This means that you will rarely need to add references manually to the system.web > compilation > assemblies element. In many cases you can and should delete the element entirely. It should only contain references to assemblies stored in the GAC. Using Copy Local = true is the recommended approach to include non-GAC references required by the ASP.NET runtime compiler.
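
For reference, the wildcard configuration looks roughly like this (a sketch of the relevant fragment of the root web.config, not an exact copy):

<system.web>
  <compilation>
    <assemblies>
      <!-- Reference every assembly found in the application's bin folder (the PAC). -->
      <add assembly="*" />
    </assemblies>
  </compilation>
</system.web>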

Also note that many subtle errors can occur if you use the system.web > compilation > assemblies element to specify a specific version number using the assembly’s strong name. The ASP.NET runtime compiler will attempt to compile using the exact version you specify. This can cause problems if the non-dynamic components of the application were compiled against a different version of the assembly during the MSBuild compilation phase. This is often the case because MSBuild will use the latest version it can find, and only the exact version if you set Specific Version = true.

Additional Resources:

http://jack.ukleja.com/diagnosing-asp-net-page-compilation-errors/
http://blog.fredrikhaglund.se/blog/2008/02/23/get-control-over-your-assembly-dependencies/
https://dhakshinamoorthy.wordpress.com/2011/10/01/msbuild-assembly-resolve-order/
http://www.beefycode.com/post/resolving-binary-references-in-msbuild.aspx

Support No Support!

Section9 is happy to announce that we have migrated to a new hosting provider called No Support Linux. For just $1/month, they provide a single website running on Apache with 1 GB storage, 30 GB data transfer, cPanel, SSH to a jailed root, shared hosting on fairly fast servers and no support at all!

Although we will miss Arvixe and IIS, the performance and price of No Support Linux is hard to beat.

How to Secure Your WordPress Site with CloudFlare for Free

As modern browsers and the web community in general continue to move toward secure access protocols like HTTPS, websites that do not offer these features are increasingly at a disadvantage. In the past, configuring a website to use HTTPS could be a significant challenge even for those with a technical background. Additionally, the cost of purchasing the SSL certificate required for HTTPS was often prohibitive. As a result, many website owners were forced to accept the financial and technical overhead or elected not to participate in HTTPS at all.

Fortunately, the barrier to entry is much lower now. The price of SSL certificates continues to drop and organizations like Let’s Encrypt (LE) and CloudFlare provide them free of charge along with automated configuration. While I am really excited about what LE is doing, the simplicity of securing a WordPress site with CloudFlare is impressive. Simply perform the following steps.

  1. Create a CloudFlare account at https://cloudflare.com and add your website.
  2. Create a DNS entry for your domain name and ensure that the CloudFlare option is active.
  3. Ensure that the SSL configuration for your website in CloudFlare is set to Flexible.
  4. Login to your domain registrar and update your name servers to the name servers provided by CloudFlare. (Be sure to record the original values in case you want to revert back to them.)
  5. Wait for CloudFlare to request and activate an SSL certificate for your domain.
  6. Log in to WordPress and install the CloudFlare plugin.
  7. Configure the CloudFlare plugin with your domain name, API key and API email.

That’s all. You’re done! You should now be able to access your site using HTTPS.

How it works

Among other things, CloudFlare is a reverse proxy. A reverse proxy is a service that handles incoming requests on behalf of one or more websites. Requests made to a website behind a reverse proxy are actually handled by the reverse proxy and not the originally requested website. When the reverse proxy receives a request, it contacts the requested website on behalf of the requesting client. The requested website then responds to the reverse proxy which relays the response on to the requesting client. In other words, a reverse proxy functions as a go-between for the requesting client and the requested website. This allows the connection between the requesting client and the reverse proxy to be secured with HTTPS even though the connection between the reverse proxy and the requested website is not.
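
A quick way to confirm that CloudFlare is proxying your site is to inspect the response headers (replace the domain with your own; the header names below are typical of CloudFlare but may vary with configuration):

curl -sI https://www.example.com

When the proxy is active, the response usually includes headers such as server: cloudflare and a cf-ray request identifier.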

This is very convenient and requires minimal change to your website; however, there are some security implications to consider. First and foremost, both you and your visitors must trust CloudFlare to be responsible and honest. Since you do not control the private keys of the CloudFlare certificates used to secure your website, you are entirely dependent upon them for your security. The keys could be lost, stolen or abused. You have no guarantees. On the other hand, CloudFlare, as far as I can tell, is a reputable and trustworthy organization and the likelihood of disaster is probably smaller than the odds of randomly running into an iceberg in the middle of the ocean with an unsinkable ship. I’m just saying.

Secondly, communication between CloudFlare and your website is, by default, no different than before CloudFlare was involved. Unless otherwise secured, information exchanged between CloudFlare and your website is not guaranteed to be confidential or unmodified in transit. In fact, there is no guarantee that CloudFlare is even communicating with your website (and not an impostor). This is perhaps even more sinister given that the person requesting your site over HTTPS has the false impression that their communication is secure when it isn’t.

For these reasons, I do believe that LE is a much better free SSL solution than CloudFlare. Unfortunately LE requires a little more technical expertise, is not fully supported on Microsoft platforms and is currently still in beta testing. So for now, CloudFlare is our top pick but stay tuned for future developments with LE.
