Archive for the ‘Et cetera’ Category

Angular 2 and Plunker

I put together this plunk for a presentation I will be giving about Angular 2 at work. After working with server-side frameworks for so long, the flexibility and power of a client-side framework is truly amazing. It’s hard to believe that it is possible to code up a fully functional Angular application inside a webpage and then embed that application into another website. The web world has truly changed a lot in the last few years and significantly for the better! I only hope the enterprise world can keep up.

The Pajama Coder

My apprentice hard at work in her pajamas!  =)

Useful OpenSSL Commands

As a software developer, especially when working with security or web-based technologies, one is often required to deal with X.509 certificates. Although it is increasingly easy to obtain a certificate issued by a trusted certificate authority, understanding how to create and use them yourself is invaluable. Below is a list of the OpenSSL commands that have been the most useful to me.

Create a new RSA key:

openssl genrsa -out key_filename.key 2048

The genrsa command generates a random RSA private key based on two large prime numbers. The size of the key in bits is determined by the last argument (2048 above). Larger values are more secure, but use more resources. NIST estimates that 2048-bit keys will remain secure until 2030. By contrast, a 1024-bit key can currently be cracked in approximately 100 hours.
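
Once generated, the key can be sanity-checked. The following is a quick sketch using standard openssl rsa options against the filename from the example above; the first command verifies the key’s consistency and the second prints its components:

openssl rsa -check -noout -in key_filename.key
openssl rsa -text -noout -in key_filename.key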

Create self-signed X.509 certificate:

openssl req \
    -x509 \
    -sha256 \
    -days 9999 \
    -newkey rsa:2048 \
    -keyout key_filename.key \
    -nodes \
    -out ca_filename.cer

The req command with option -x509 generates a self-signed X.509 certificate which can be used for testing or as the root certificate of a certificate authority. Certificates generated in this manner are very useful for development but have limited use in production environments. A common use of self-signed certificates is to enable HTTPS on a local or non-production web server.

When used in this manner, the req command will prompt for information to be included in the resulting certificate. When prompted for Common Name, provide the fully qualified domain name to be secured (ex. www.google.com). It is also possible to create a wildcard certificate which can be used to secure unlimited sub-domains by using the * character in place of a concrete sub-domain (ex. *.google.com). Note that to secure multiple domains with the same certificate (ex. www.google.com, www.google.org, www.google.net) the Subject Alternative Name (SAN) extension must be used instead. Using the SAN extension is discussed in the example for the x509 command below.

The -days option determines the number of days (relative to the current date) that the certificate will be valid. The -newkey option generates a new CSR and private key using the specified algorithm. If the -keyout option is used, the generated key will be output to the specified file. Option -nodes (no DES) indicates that the generated key should be stored without encryption and can be omitted if you wish to protect the key with a pass phrase.

The -sha256 option indicates that the SHA256 hashing algorithm should be used to generate the message digest when digitally signing the certificate. Note that SHA256 is currently the default and this option can usually be omitted. It is important to use a secure digest algorithm: certificates signed with an insecure message digest such as SHA1 trigger warnings in some browsers, and clients may not consider the resulting site trustworthy.
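
To double-check what was generated, the certificate can be dumped in human-readable form. This is a standard use of the x509 command, shown here against the filename from the example above:

openssl x509 -text -noout -in ca_filename.cer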

Create certificate signing request:

openssl req \
    -new \
    -sha256 \
    -key key_filename.key \
    -out request_filename.csr

The req command with option -new generates a new certificate signing request containing the information needed by a certificate authority to create a new X.509 certificate. This command prompts for the same certificate information as the example above.

The -key option specifies the key file to use. It is also common to generate a new key as part of this command by using, for example, -newkey rsa:2048 -keyout key_filename.key -nodes instead of -key key_filename.key. This technique eliminates the need to use genrsa as shown in the first example above.
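
Before submitting a CSR to a certificate authority, it is worth confirming its contents and signature. A minimal sketch using standard req options and the filename from the example above:

openssl req -text -noout -verify -in request_filename.csr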

Sign CSR to create new X.509 certificate:

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer

The x509 command with option -req is used to create an X.509 certificate from a certificate signing request. The -CA option specifies the certificate of the signing Certificate Authority and the -CAkey option specifies the private key to use for the digital signature of the resulting certificate. The -CAcreateserial option indicates that a new serial number file should be created if needed and that the serial number of the resulting certificate should be read from this file.
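
After signing, the new certificate can be checked against the CA certificate to confirm the chain of trust. A quick sketch using the filenames above:

openssl verify -CAfile ca_filename.cer certificate_filename.cer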

To create a certificate with the Subject Alternative Name (SAN) extension, add the options -extensions v3_req -extfile san.cnf as shown below.

openssl x509 \
    -req \
    -sha256 \
    -days 9999 \
    -in request_filename.csr \
    -CA ca_filename.cer \
    -CAkey key_filename.key \
    -CAcreateserial \
    -out certificate_filename.cer \
    -extensions v3_req \
    -extfile san.cnf

The file specified by -extfile is used to define the domain names to be secured by the certificate and should be in the following format.

[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = your.domain.name
DNS.2 = another.domain
...
DNS.x = last.domain.name
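
To confirm that the SAN extension actually made it into the issued certificate, dump the certificate and check for the Subject Alternative Name section. A quick sketch (the grep filter is a convenience that assumes a Unix-like shell):

openssl x509 -text -noout -in certificate_filename.cer | grep -A 1 "Subject Alternative Name"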

For more information about creating certificates with the SAN extension see http://techbrahmana.blogspot.com/2013/10/creating-wildcard-self-signed.html.

Combine X.509 certificate and key into PKCS12 encoding:

openssl pkcs12 \
    -export \
    -out pkcs12_filename.pfx \
    -inkey key_filename.key \
    -in certificate_filename.cer

The pkcs12 command with the -export option can be used to combine a key and certificate into a single file in PKCS12 format secured by a pass phrase. This is useful when working with Microsoft systems where PKCS12 is commonly used. Note that the file extension used for PKCS12 is typically .pfx on Microsoft systems and .p12 on Linux systems.
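
The reverse operation is also possible: the same pkcs12 command can extract the key and certificate back out of the PKCS12 file. A sketch using the filenames above (-nodes leaves the extracted key unencrypted):

openssl pkcs12 -in pkcs12_filename.pfx -nocerts -nodes -out key_filename.key
openssl pkcs12 -in pkcs12_filename.pfx -clcerts -nokeys -out certificate_filename.cer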

References in .NET

I put together the following information while troubleshooting a nasty runtime bug that manifested only after deployment to a production environment. Since then I have returned countless times to clarify my understanding. I am posting it here in the hope that it will help others as much as it has helped me.

Introduction

In .NET, code is typically compiled into files called assemblies. It is possible for code in one assembly to invoke code in another assembly if a reference is declared. In this way, code from a large variety of sources can be combined and reused. In order for this process to work, each reference must be resolved. Reference resolution is the process of locating the concrete file corresponding to the referenced assembly. It is important to understand that reference resolution occurs at both compile time and at runtime and the process for each is totally different. Failing to understand this point can lead to endless headache. Believe me, I know.

Runtime Reference Resolution (aka binding)

When an application is invoked, it must be loaded into memory. If an application uses objects in another assembly, that assembly must also be loaded into memory. The .NET framework uses the following process to do this.

  • Determine the version of the referenced assembly.
    • The version of the referenced assembly is written to the application’s manifest at compile time. This version will be used unless overridden via configuration (a binding redirect example follows this list):
      • application/web.config
      • Publisher Policy (overrides application/web.config)
      • machine.config (overrides Publisher Policy and application/web.config)
  • If the assembly was previously loaded, it is re-used from the cache.
  • If a strong name is provided, search the GAC.
  • Probe
    • If the codeBase element is specified, use it.
      • Binding failure if not found.
      • Binding failure on version, culture, or public key mismatch.
    • Search the application base path. Matching is by simple name; binding fails if the first match is the wrong version.
      • If no culture is provided, search root then root/[assembly name].
      • If a culture is provided, search root/[culture] then root/[culture]/[assembly name].
      • If web/app.config specifies a probing element, search the paths in privatePath. Paths must be relative to the application root.
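
One common way to override the manifest version is a binding redirect in application/web.config. The following is a minimal sketch; the assembly name, public key token and version numbers are hypothetical placeholders.

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identify the assembly to redirect (name and token are placeholders). -->
        <assemblyIdentity name="SomeLibrary" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <!-- Any requested version in the old range binds to the new version instead. -->
        <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>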

For more information see http://msdn.microsoft.com/en-us/library/yx7xezcf%28v=vs.110%29.aspx.

Compile Time Reference Resolution

Compile time resolution occurs in MSBuild during the build process. MSBuild is the build engine used by both Visual Studio and TFS. Note that for ASP.NET applications, there is an extra compile step that occurs for dynamic components (aspx, ascx, asax, cshtml, etc.) when they are first accessed. Reference resolution for these two scenarios is described below.

MSBuild

Assembly resolution occurs in the ResolveAssemblyReferences MSBuild target. This target invokes the ResolveAssemblyReference task, passing the value of the AssemblySearchPaths property to the SearchPaths parameter, which is assigned a value as follows.

<!-- The SearchPaths property is set to find assemblies in the following order:
     (1) Files from current project - indicated by {CandidateAssemblyFiles}
     (2) $(ReferencePath) - the reference path property, which comes from the .USER file.
     (3) The hintpath from the referenced item itself, indicated by {HintPathFromItem}.
     (4) The directory of MSBuild's "target" runtime from GetFrameworkPath.
         The "target" runtime folder is the folder of the runtime that MSBuild is a part of.
     (5) Registered assembly folders, indicated by {Registry:*,*,*}
     (6) Legacy registered assembly folders, indicated by {AssemblyFolders}
     (7) Resolve to the GAC.
     (8) Treat the reference's Include as if it were a real file name.
     (9) Look in the application's output folder (like bin\debug) -->
{CandidateAssemblyFiles};
$(ReferencePath);
{HintPathFromItem};
{TargetFrameworkDirectory};
{Registry:$(FrameworkRegistryBase),$(TargetFrameworkVersion),$(AssemblyFoldersSuffix)$(AssemblyFoldersExConditions)};
{AssemblyFolders};
{GAC};
{RawFileName};
$(OutDir)

There is a lot going on here and I don’t claim to understand all of it, but I will try to point out the important parts.

  • The most common locations to find a reference are (in search order)
    • Files added manually to the project (ex. <project path>/lib/coollib.dll)
    • Location specified by hint path.
    • GAC
    • Application output path.
  • References flagged with Copy Local = true are copied to the application output path *after* compilation. This means that the value of this setting has no impact on the reference resolution process for MSBuild. Note that the Copy Local UI setting maps to the <Private> element in the project file (see the sketch after this list).
  • MSBuild will always try to use the latest version available for a given assembly unless Specific Version = true is specified. The default value for this setting is false, which means that when searching the GAC, the latest version of a DLL will always be used regardless of the version specified in the project definition.
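
For reference, this is roughly how these settings appear in a project file; SomeLibrary and its hint path are hypothetical placeholders.

<Reference Include="SomeLibrary">
  <!-- Search hint used by {HintPathFromItem} in the search order above. -->
  <HintPath>..\lib\SomeLibrary.dll</HintPath>
  <!-- Copy Local: copy the resolved assembly to the output path after compilation. -->
  <Private>True</Private>
  <!-- When False, MSBuild accepts the latest resolvable version. -->
  <SpecificVersion>False</SpecificVersion>
</Reference>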

ASP.NET Runtime Compiler

Unless previously compiled into the project output folder using the pre-compile option at build time, all dynamic content (aspx, ascx, asax, cshtml, etc.) is compiled once at runtime when the application is first accessed. This dynamic content can also have dependencies on other assemblies. The system.web > compilation > assemblies element is used to tell the ASP.NET runtime compiler about these dependencies so that it can reference them.

The ASP.NET runtime compiler will search the following locations in order for these references.

  • The application’s private assembly cache (aka PAC), which is the <app path>/bin folder.
  • GAC (if the reference is specified using a strong name).

Note that by default, the root web.config references a few system assemblies and all assemblies in the PAC using the <add assembly="*" /> wildcard syntax. This means you will rarely need to add references manually to the system.web > compilation > assemblies element. In many cases you can and should delete the element entirely. It should only contain references to assemblies stored in the GAC; using Copy Local = true is the recommended way to include non-GAC references required by the ASP.NET runtime compiler.

Also note that many subtle errors can occur if you use the system.web > compilation > assemblies element to specify a specific version number using the assembly’s strong name. The ASP.NET runtime compiler will attempt to compile using the exact version you specify. This can cause problems if the non-dynamic components of the application were compiled against a different version of the assembly during the MSBuild compilation phase. This is often the case because MSBuild uses the latest version it can find and only the exact version if you set Specific Version = true.
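
For completeness, here is the shape of the element in question; the assembly identity below is a hypothetical placeholder and, as noted above, only GAC-resident assemblies belong here.

<system.web>
  <compilation>
    <assemblies>
      <!-- Strong name resolved from the GAC; identity shown is a placeholder. -->
      <add assembly="SomeGacLibrary, Version=2.0.0.0, Culture=neutral, PublicKeyToken=32ab4ba45e0a69a1" />
    </assemblies>
  </compilation>
</system.web>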

Additional Resources:

http://jack.ukleja.com/diagnosing-asp-net-page-compilation-errors/
http://blog.fredrikhaglund.se/blog/2008/02/23/get-control-over-your-assembly-dependencies/
https://dhakshinamoorthy.wordpress.com/2011/10/01/msbuild-assembly-resolve-order/
http://www.beefycode.com/post/resolving-binary-references-in-msbuild.aspx

Unicode Emoji(!?!)

I had no idea there were Unicode emoji codes that are standardized across platforms! http://unicode.org/emoji/charts/full-emoji-list.html 🙃

There are some fun extensions for Chrome, too. I searched for “unicode emoji” and found a couple that work well, a couple that didn’t. YMMV, so I’m reluctant to recommend one of those. Just try them out. 😎

Support No Support!

Section9 is happy to announce that we have migrated to a new hosting provider called No Support Linux. For just $1/month, they provide a single website running on Apache with 1 GB storage, 30 GB data transfer, cPanel, SSH to a jailed root, shared hosting on fairly fast servers and no support at all!

Although we will miss Arvixe and IIS, the performance and price of No Support Linux is hard to beat.

How to Secure Your WordPress Site with CloudFlare for Free

As modern browsers and the web community in general continue to move toward secure access protocols like HTTPS, websites that do not offer these features are increasingly at a disadvantage. In the past, configuring a website to use HTTPS could be a significant challenge even for those with a technical background. Additionally, the cost of purchasing the SSL certificate required for HTTPS was often prohibitive. As a result, many website owners were forced to accept the financial and technical overhead or elected not to participate in HTTPS at all.

Fortunately, the barrier to entry is much lower now. The price of SSL certificates continues to drop, and organizations like Let’s Encrypt (LE) and CloudFlare provide them free of charge along with automated configuration. While I am really excited about what LE is doing, the simplicity of securing a WordPress site with CloudFlare is impressive. Simply perform the following steps.

  1. Create a CloudFlare account at https://cloudflare.com and add your website.
  2. Create a DNS entry for your domain name and ensure that the CloudFlare option is active.
  3. Ensure that the SSL configuration for your website in CloudFlare is set to Flexible.
  4. Login to your domain registrar and update your name servers to the name servers provided by CloudFlare. (Be sure to record the original values in case you want to revert back to them.)
  5. Wait for CloudFlare to request and activate an SSL certificate for your domain.
  6. Log in to WordPress and install the CloudFlare plugin.
  7. Configure the CloudFlare plugin with your domain name, API key and API email.

That’s all. You’re done! You should now be able to access your site using HTTPS.
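
To verify the result from the command line rather than a browser, a quick check with curl (assuming it is installed) should show a successful response along with CloudFlare response headers:

curl -sI https://your.domain.name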

How it works

Among other things, CloudFlare is a reverse proxy. A reverse proxy is a service that handles incoming requests on behalf of one or more websites. Requests made to a website behind a reverse proxy are actually handled by the reverse proxy and not the originally requested website. When the reverse proxy receives a request, it contacts the requested website on behalf of the requesting client. The requested website then responds to the reverse proxy, which relays the response on to the requesting client. In other words, a reverse proxy functions as a go-between for the requesting client and the requested website. This allows the connection between the requesting client and the reverse proxy to be secured with HTTPS even though the connection between the reverse proxy and the requested website is not.

This is very convenient and requires minimal change to your website; however, there are some security implications to consider. First and foremost, both you and your visitors must trust CloudFlare to be responsible and honest. Since you do not control the private keys of the CloudFlare certificates used to secure your website, you are entirely dependent upon them for your security. The keys could be lost, stolen or abused. You have no guarantees. On the other hand, CloudFlare, as far as I can tell, is a reputable and trustworthy organization and the likelihood of disaster is probably smaller than the odds of randomly running into an iceberg in the middle of the ocean with an unsinkable ship. I’m just saying.

Secondly, communication between CloudFlare and your website is, by default, no different than before CloudFlare was involved. Unless otherwise secured, information exchanged between CloudFlare and your website is not guaranteed to be confidential or unmodified in transit. In fact, there is no guarantee that CloudFlare is even communicating with your website (and not an impostor). This is perhaps even more sinister given that the person requesting your site over HTTPS has the false impression that their communication is secure when it isn’t.

For these reasons, I do believe that LE is a much better free SSL solution than CloudFlare. Unfortunately LE requires a little more technical expertise, is not fully supported on Microsoft platforms and is currently still in beta testing. So for now, CloudFlare is our top pick but stay tuned for future developments with LE.

An Introduction to SSL, TLS and HTTPS

Secure Sockets Layer (SSL) refers to a set of cryptographic protocols originally developed by Netscape for the purpose of securing communication between endpoints in a network. Due to security vulnerabilities, all versions of SSL have been deprecated and use of Transport Layer Security (TLS) is strongly advised. Because TLS is essentially a newer version of SSL, the term SSL is commonly used to mean either SSL or TLS.

Secure communication with a website is accomplished by means of the HTTPS protocol which is simply the use of SSL/TLS to encrypt HTTP messages. All modern browsers are capable of HTTPS communication, but it must be manually enabled on the website before it can be used.

To enable HTTPS for a website, an X.509 certificate is required. These certificates are typically purchased from a Certificate Authority (CA) such as Symantec, VeriSign, Thawte or GoDaddy and can be fairly expensive. An X.509 certificate contains information about who it was issued to (usually a website domain name), who it was issued by (usually a CA) and a public key which can be used for encryption and decryption. The public key in the certificate is mathematically related to a private key known only to the owner of the certificate. Information encrypted with the public key can only be decrypted with the private key and vice versa. This is known as asymmetric key encryption.

When a website resource is requested using HTTPS, an SSL/TLS handshake must occur before any information can be exchanged. The purpose of this handshake is to verify the identity of the website, establish which cryptographic algorithms to use (the cipher suite) and agree upon a shared master key both parties can use for encryption and decryption. In general, the process consists of the following steps; a way to observe the handshake from the command line is shown after the list. For a more detailed explanation, Chapter 4, Transport Layer Security, of High Performance Browser Networking by Ilya Grigorik provides an excellent description.

  1. A TCP/IP connection is established.
  2. The browser sends information about which protocol versions and cipher suites it supports.
  3. The server selects a protocol version and cipher suite and attaches the website’s X.509 certificate.
  4. The browser validates the certificate, generates a master key and sends it securely to the server by encrypting it with the public key in the provided certificate.
  5. The server decrypts the master key with its own private key and notifies the client that it is ready to proceed.
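
It is possible to observe most of this exchange using the openssl s_client command. A quick sketch (www.google.com is just an example host) that prints the certificate chain, negotiated protocol and cipher suite:

openssl s_client -connect www.google.com:443 -servername www.google.com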

It is worth noting that the certificate’s public key is only used once (to encrypt the shared master key in step 4 above). Although it would be possible to use the certificate’s public key to encrypt and decrypt all data sent to and from the server (eliminating the need for a shared master key altogether), it is not practical. Asymmetric key encryption is significantly slower than symmetric key encryption. Therefore, in order to maximize performance, asymmetric key encryption is used only in the handshake and symmetric key encryption is used for the remainder of the connection.

Since encryption alone can only guarantee privacy, another important aspect of the handshake is the certificate validation process. This step verifies the identity of the website and ensures that the browser is not communicating with an impostor. Certificate validation is based on a system of trust. Every X.509 certificate is signed by another X.509 certificate. This signifies that the owner of the signing certificate trusts the owner of the signed certificate. In this way, any given X.509 certificate forms a node in a chain of trust. The root certificate in every chain of trust is self-signed and must be trusted explicitly.

Typically, website certificates are leaf nodes in the trust chain and CA certificates are root nodes. Most browsers ship with a list of trusted root certificates from well known and trustworthy CAs. Most operating systems also ship with a similar list of trusted root certificates and provide a way for users to add new certificates to it. In general, certificate validation in the SSL/TLS handshake simply verifies that the certificate presented by the website matches the domain name that was requested, that it has not expired or been revoked and that it chains up to an explicitly trusted root.

If any part of this process fails, the browser will inform the user that there is a problem with the certificate and may also provide an option to continue. When this occurs, there is no guarantee that the connection is being made to the intended website or that any information exchanged will be private. Although this sounds very serious, it may often be acceptable to proceed despite the warning. Ignoring a certificate validation warning is no less secure than accessing a website with the HTTP protocol (no security at all). Although it is not ideal to access any site over HTTP, it is nevertheless common practice and often the only option available. For websites that require the exchange of financial, personal or otherwise private information, a valid HTTPS connection should always be used.

In the end, the most important thing to understand about these protocols is what aspects they guarantee about communication.

  1. Confidentiality – Communication is private. This is achieved by encrypting all data with a key known only to the communicating parties.
  2. Integrity – Communication cannot be altered without detection. Although not discussed above, a Message Authentication Code (MAC) is included in every exchange. This allows the receiver to verify that the message was not modified or corrupted in any way since the MAC was calculated.
  3. Authenticity – Communication is occurring with the intended party and not an impostor. This is verified during the certificate validation process of the SSL/TLS handshake. A fully trusted certificate implies that the owner is who they claim to be and that they (and no one else) control the certificate’s private key.

Control Hardware via Web Page using Adafruit PWM/Servo Pi Hat for Pan/Tilt Camera Mount

Recently we have been looking into building a robot that is good for educational purposes. One of our design possibilities was to use a web interface to control hardware. I did not know if this was possible. It turns out that it really is not that difficult. You can see what I’ve done with the project so far here: https://github.com/lelandg/PWM-Servo-Hat-Through-Apache-Demo

Here’s a screenshot of the current version (click to see full-size):

[Screenshot: the show_info control page served from 192.168.1.91/PiServed]

Note: debug output will show up after you submit the form one or more times and/or while the page is processing (servos are moving in this case).

I have plans to add more customization. I have an RGB LED hooked up to 3 of the PWM output ports, and I’ve confirmed that it works with a test Python script, so I’m working on the HTML embedded within the Python. Once I have it working, I’ll upload that, but probably will not update the image on this page. (Because then it would never stop changing.) If significant changes are made, I may create a new post detailing any trouble I encountered. (Really not much at all so far on this one.)

And I may add more items as they occur to me.


Contact Boogieman (Leland)

The Ghost of Roboduck Jones

An open letter to BL from T.

So I was sitting here reminiscing and guess what popped into my head? Roboduck! Do you guys remember that project? Just a little robotic duck that swam around teasing other ducks to suck on a shotgun. I think it sank. There may have been plans for a flamethrower. There should always be plans for a flamethrower. I wish I still had the pics. Alas, no; nor does the Wayback Machine. But I was surprised to discover that a new group holds bltlabs.com. How interesting.

<30 minutes pass>

Well look who I found hanging out in an archive of jointsandjams.com. I feel better, but now I want to work on jointsandjams.com again.

[Image: Roboduck]

In case you are interested, I have cobbled together a little Facebook community page for section9 and shared a few of our projects over the years. How the hell did we ever have time for any of this? The newfangled Raspberry Pis are just not the same. It’s not fun unless it’s a hand grenade. You can quote me on that.

https://www.facebook.com/section9.space


Info

Section9 is a hackerspace based out of the Springfield, Missouri area. For more information, please see the About Us page or find us on Facebook.