How to Secure Your WordPress Site with CloudFlare for Free

As modern browsers and the web community in general continue to move toward secure access protocols like HTTPS, websites that do not offer them are increasingly at a disadvantage. In the past, configuring a website to use HTTPS could be a significant challenge even for those with a technical background. Additionally, the cost of purchasing the SSL certificate required for HTTPS was often prohibitive. As a result, many website owners either accepted the financial and technical overhead or elected not to offer HTTPS at all.

Fortunately, the barrier to entry is much lower now. The price of SSL certificates continues to drop, and organizations like Let’s Encrypt (LE) and CloudFlare provide them free of charge along with automated configuration. While I am really excited about what LE is doing, the simplicity of securing a WordPress site with CloudFlare is impressive. Simply perform the following steps.

  1. Create a CloudFlare account at https://cloudflare.com and add your website.
  2. Create a DNS entry for your domain name and ensure that the CloudFlare option is active.
  3. Ensure that the SSL configuration for your website in CloudFlare is set to Flexible.
  4. Log in to your domain registrar and update your name servers to the name servers provided by CloudFlare. (Be sure to record the original values in case you want to revert to them.)
  5. Wait for CloudFlare to request and activate an SSL certificate for your domain.
  6. Log in to WordPress and install the CloudFlare plugin.
  7. Configure the CloudFlare plugin with your domain name, API key and API email.

That’s all. You’re done! You should now be able to access your site using HTTPS.
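If you'd like a quick sanity check from the command line, something like this should do (example.com is just a placeholder for your own domain):

# Ask for the response headers over HTTPS; a 200 (or a redirect) means the
# certificate is active and the site is being served securely.
curl -sI https://example.com/ | head -n 1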

How It Works

Among other things, CloudFlare is a reverse proxy. A reverse proxy is a service that handles incoming requests on behalf of one or more websites. Requests made to a website behind a reverse proxy are actually handled by the reverse proxy and not by the originally requested website. When the reverse proxy receives a request, it contacts the requested website on behalf of the requesting client. The requested website then responds to the reverse proxy, which relays the response on to the requesting client. In other words, a reverse proxy functions as a go-between for the requesting client and the requested website. This allows the connection between the requesting client and the reverse proxy to be secured with HTTPS even though the connection between the reverse proxy and the requested website is not.
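A quick way to see the reverse proxy at work is to look at the response headers; CloudFlare adds a few of its own (example.com is a placeholder):

# The "server: cloudflare" and "cf-ray" headers come from the proxy,
# not from your WordPress server.
curl -sI https://example.com/ | grep -iE '^(server|cf-ray):'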

This is very convenient and requires minimal change to your website; however, there are some security implications to consider. First and foremost, both you and your visitors must trust CloudFlare to be responsible and honest. Since you do not control the private keys of the CloudFlare certificates used to secure your website, you are entirely dependent upon them for your security. The keys could be lost, stolen or abused. You have no guarantees. On the other hand, CloudFlare, as far as I can tell, is a reputable and trustworthy organization and the likelihood of disaster is probably smaller than the odds of randomly running into an iceberg in the middle of the ocean with an unsinkable ship. I’m just saying.

Secondly, communication between CloudFlare and your website is, by default, no different than before CloudFlare was involved. Unless otherwise secured, information exchanged between CloudFlare and your website is not guaranteed to be confidential or unmodified in transit. In fact, there is no guarantee that CloudFlare is even communicating with your website (and not an impostor). This is perhaps even more sinister given that the person requesting your site over HTTPS has the false impression that their communication is secure when it isn’t.
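As a rough illustration of the gap (203.0.113.10 stands in for your origin server's real address, and example.com for your domain), you can still talk to the origin directly over plain HTTP:

# Bypass CloudFlare and hit the origin server itself. With the Flexible
# setting, this leg of the journey is unencrypted HTTP, so anything sent
# between CloudFlare and your host could be read or altered in transit.
curl -sI http://203.0.113.10/ -H 'Host: example.com' | head -n 1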

For these reasons, I do believe that LE is a much better free SSL solution than CloudFlare. Unfortunately, LE requires a little more technical expertise, is not fully supported on Microsoft platforms and is still in beta testing. So for now, CloudFlare is our top pick, but stay tuned for future developments with LE.

An Introduction to SSL, TLS and HTTPS

Secure Sockets Layer (SSL) refers to a set of cryptographic protocols originally developed by Netscape for the purpose of securing communication between endpoints in a network. Due to security vulnerabilities, all versions of SSL have been deprecated and use of Transport Layer Security (TLS) is strongly advised. Because TLS is essentially a newer version of SSL, the term SSL is commonly used to mean either SSL or TLS.

Secure communication with a website is accomplished by means of the HTTPS protocol which is simply the use of SSL/TLS to encrypt HTTP messages. All modern browsers are capable of HTTPS communication, but it must be manually enabled on the website before it can be used.

To enable HTTPS for a website, an X.509 certificate is required. These certificates are typically purchased from a Certificate Authority (CA) such as Symantec, VeriSign, Thawte or GoDaddy and can be fairly expensive. An X.509 certificate contains information about who it was issued to (usually a website domain name), who it was issued by (usually a CA) and a public key which can be used for encryption and decryption. The public key in the certificate is mathematically related to a private key known only to the owner of the certificate. Information encrypted with the public key can only be decrypted with the private key and vice versa. This is known as asymmetric key encryption.
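If you're curious, the openssl command-line tool can fetch a site's certificate and decode these fields (example.com is a placeholder; exact output varies by openssl version):

# Print who the certificate was issued to, who issued it, and when it expires.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates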

When a website resource is requested using HTTPS, an SSL/TLS handshake must occur before any information can be exchanged. The purpose of this handshake is to verify the identity of the website, establish which cryptographic algorithms to use (the ciphersuite) and agree upon a shared master key both parties can use for encryption and decryption. In general, the process consists of the following steps (a short command-line example follows the list). For a more detailed explanation, Chapter 4, “Transport Layer Security”, of High Performance Browser Networking by Ilya Grigorik provides an excellent description.

  1. A TCP/IP connection is established.
  2. The browser sends information about which protocol versions and ciphersuites it supports.
  3. The server selects a protocol version and ciphersuite and attaches the website’s X.509 certificate.
  4. The browser validates the certificate, generates a master key and sends it securely to the server by encrypting it with the public key in the provided certificate.
  5. The server decrypts the master key with its own private key and notifies the client that it is ready to proceed.
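You can watch the outcome of this negotiation with openssl s_client; among other things it reports the protocol version and ciphersuite that were agreed upon (example.com is a placeholder and the output format varies a bit between versions):

# Perform a TLS handshake and show the negotiated protocol and ciphersuite.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'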

It is worth noting that the certificate’s public key is only used once (to encrypt the shared master key in step 4 above). Although it would be possible to use the certificate’s public key to encrypt and decrypt all data sent to and from the server (eliminating the need for a shared master key altogether), it is not practical. Asymmetric key encryption is significantly slower than symmetric key encryption. Therefore, in order to maximize performance, asymmetric key encryption is used only in the handshake and symmetric key encryption is used for the remainder of the connection.

Since encryption alone can only guarantee privacy, another important aspect of the handshake is the certificate validation process. This step verifies the identity of the website and ensures that the browser is not communicating with an impostor. Certificate validation is based on a system of trust. Every X.509 certificate is signed by another X.509 certificate. This signifies that the owner of the signing certificate trusts the owner of the signed certificate. In this way, any given X.509 certificate forms a node in a chain of trust. The root certificate in every chain of trust is self-signed and must be trusted explicitly.

Typically, website certificates are leaf nodes in the trust chain and CA certificates are root nodes. Most browsers ship with a list of trusted root certificates from well-known and trustworthy CAs. Most operating systems also ship with a similar list of trusted root certificates and provide a way for users to add new certificates to it. In general, certificate validation in the SSL/TLS handshake simply verifies that the certificate presented by the website matches the domain name that was requested, that it has not expired or been revoked and that it chains up to an explicitly trusted root.
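The same tool also prints the chain a site presents and whether it validates against your local trusted roots (again, example.com is a placeholder):

# s: lines are the certificate subjects, i: lines are their issuers;
# "Verify return code: 0 (ok)" means the chain ends at a trusted root.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | grep -E 'Certificate chain|s:|i:|Verify return code'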

If any part of this process fails, the browser will inform the user that there is a problem with the certificate and may also provide an option to continue. When this occurs, there is no guarantee that the connection is being made to the intended website or that any information exchanged will be private. Although this sounds very serious, it may often be acceptable to proceed despite the warning. Ignoring a certificate validation warning is no less secure than accessing a website with the HTTP protocol (no security at all). Although it is not ideal to access any site over HTTP, it is nevertheless common practice and often the only option available. For websites that require the exchange of financial, personal or otherwise private information, a valid HTTPS connection should always be used.

In the end, the most important thing to understand about these protocols is what aspects they guarantee about communication.

  1. Confidentiality – Communication is private. This is achieved by encrypting all data with a key known only to the communicating parties.
  2. Integrity – Communication cannot be altered without detection. Although not discussed above, a Message Authentication Code (MAC) is included in every exchange. This allows the receiver to verify that the message was not modified or corrupted in any way since the MAC was calculated. (A tiny command-line illustration of the idea follows this list.)
  3. Authenticity – Communication is occurring with the intended party and not an impostor. This is verified during the certificate validation process of the SSL/TLS handshake. A fully trusted certificate implies that the owner is who they claim to be and that they (and no one else) control the certificate’s private key.
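As a rough illustration of the integrity idea (this is not the exact construction TLS uses internally, just the general concept of a keyed hash), you can compute an HMAC from the command line:

# Only someone who knows the key can produce the matching HMAC, so any
# change to the message is detectable by the receiver.
echo -n 'transfer $100 to account 42' | openssl dgst -sha256 -hmac 'shared-secret-key'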

Section9 Linux Tip: aliases for installing and uninstalling packages (and related)

This will be a simple post. I just wanted to share some useful things to make installing and maintaining packages much simpler (not that apt is bad).

To enable these features, simply open a console window and run:

sudo nano ~/.bashrc
*Note: You normally do not need “sudo” to edit your own .bashrc, but on some systems you might.

Scroll to the end of the file and add these lines, exactly like this:

alias install='sudo apt-get install -y'
alias purge='sudo apt-get purge -y'
alias autoclean='sudo apt-get autoclean'
alias show='apt-cache show'
alias search='apt-cache search'

Finally, type Ctrl-X and when prompted to save type ‘y’ and then press <Enter> at the next prompt. You will then need to log off and back on, or else type:

. ~/.bashrc

BONUS TIP: Beginning a command with ‘.’ (a dot and a space) runs a script in your current shell, so things like new aliases take effect immediately

Also be aware that you may see errors if you re-run your .bashrc like this. They are usually harmless, but if you experience problems, log off and back on or reboot.

You can make sure you have it set up correctly by typing:

install

And pressing <Enter>. If you get a “command not found” error then you did not get the script to execute. Go ahead and reboot in this case and it should work from then on. 🙂

This allows you to do things like:

install [someprogram]
purge [someotherprogram]
autoclean
show [package_or_program_name]
search [what_was_that_program?]

When using the search command, if you do not see any matches, try a simpler search; you can use parts of words. You can also pass --help to any of these new commands and it will show the appropriate help screen. They really are just aliases: you still run the program they point to. For example, ‘search --help’ actually runs ‘apt-cache search --help’, so you get help for apt-cache search (and for apt-cache in general).

I hope this little introduction to using aliases for apt is instructive and/or helpful. If you think so, I hope you’ll subscribe to this blog, either with RSS or good-ol’ email. Or simply follow me (Leland Green) on Google+ or Facebook (and soon on my own web site, I hope).

Thank you for reading,
Leland…

Section9 Linux Tip: ls

ls

Take some time to learn some of the options available with this powerful utility that does more than just “list files”, including:

  • -R = Recurse into subdirectories.
  • -l = “Long” listing: show file sizes, timestamps and permissions.
  • --color = Colorize output, or disable colors with ‘ls --color=never’.
  • -a = All files: show everything, including “hidden” files (those beginning with a ‘.’ (period)).
  • -A = Almost all files. Like -a, but does not show the ‘.’ or ‘..’ entries (since they are always present).
  • --sort=WORD = Sort output by WORD (see ‘ls --help’ for details), or use the individual “sort switches”, such as:
    • -t = sort by timestamp
    • -S = sort by size
    • -v = sort by version
    • -X = sort by file extension

Given all of that, can you guess what my favorite alias command for ls is? I’ll give you a hint: it’s one of these (from my ~/.bashrc file):

alias ll='ls -l -a --color'
alias la='ls -A --color'
alias l='ls -CF --color'
alias l1='ls -1 -a --color=never'

Note: That last one is lowercase ‘L’ and the digit ‘1’ (one).
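Once the aliases are loaded, they work like any other command. For example (the paths here are just illustrations):

ll /etc          # long listing of /etc, including hidden files, in color
la ~             # everything in your home directory except the . and .. entries
l1 /var/log      # one name per line with no color codes (handy for piping)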

How do you use ls? Please share in the comments below. I will acknowledge all ideas you give me (and, in fact, by leaving a comment you can “seal it in stone”).


If you found this page useful or interesting, please stay tuned to the “Linux Tips” tag. We here at Section9 always appreciate any shares and/or links to our pages. Link to one of our pages from your site, then post a comment here with a link back to the page where you placed the link. In this way you can promote your website on Section9.space (and we encourage you to do so).

I would also appreciate reports of any errors, typos, mis-statements and anything else you care to nit-pick. 🙂 Leave me a comment here (preferred), or send a private message (PM) on any of the social media sites where you can find me (Google+, Facebook, Section9.space, Twitter, Instagram, etc.). I try to at least acknowledge all questions within 24 hours. If I don’t know the answer, I’ll either google it before replying or follow up sometime after that, depending on how difficult I think a true answer will be to write.

Thank you for your interest,
Leland Green…



Section9 Linux Tip: screen

Let’s start the Linux Tips series with a command for everyone:

  • screen

This is a cool program that lets you run multiple terminal sessions on one Linux machine, all within one window.

You can use:

screen -r

to reconnect to your previous session if the connection is dropped (but the Linux box is still running). Then *all* your sessions are still open! (When we ran Linux servers at work we were required to do this for a period of time, and I came to love screen!)

Create a new session:

<Ctrl>-A <Ctrl>-C

Switch to the “next” session with:

<Ctrl>-A n

One handy workflow: use one terminal window, ssh to your “primary” host and run screen there, then open an additional screen window for each machine you want to reach and ssh to it from that window.
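Here’s a rough sketch of that workflow (the host names are placeholders):

ssh primary-host     # connect to the machine that will hold the screen session
screen               # start screen there (or 'scr' if you use the alias below)
ssh web-server       # inside window 0, connect to the first machine
# <Ctrl>-A c creates a new window
ssh db-server        # inside the new window, connect to the next machine
# <Ctrl>-A n (or <Ctrl>-A p) cycles between the windows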

In my ~/.bashrc I have this line to make a new “command” named ‘scr’:

alias scr='screen -A -h 9999 -O -q'

Then I can just start it with ‘scr‘, or ‘scr -r‘ if I want to reconnect to an already-running session. (You may want other options, too. See what’s available by running ‘scr --help‘.)



Thank you for your interest,
Leland Green…

Jetpack and CloudFlare

As some of you may have noticed, we are now publishing all of our content to several social media platforms thanks to the Jetpack plugin for WordPress. This is perhaps the single most valuable plugin I’ve seen, offering features such as a free CDN for images, brute force attack protection, single sign-on, downtime monitoring and notification, site stats and analytics, social media integration, automatic plugin updates and much more.

Additionally, we’ve migrated our DNS provider from Arvixe to CloudFlare thanks to a tip from Mike at PowerShellStation. CloudFlare is a reverse proxy that offers a host of free network services like DNS, firewall, globally distributed caching and SSL.

Both CloudFlare and Jetpack have enterprise options that you can purchase (like data backups and 100% uptime guarantee for example), but what they offer for free is truly amazing and incredibly easy to configure.

Control Hardware via Web Page using Adafruit PWM/Servo Pi Hat for Pan/Tilt Camera Mount

Recently we have been looking into building a robot that is good for educational purposes. One of our design possibilities was to use a web interface to control hardware. I did not know if this was possible. It turns out that it really is not that difficult. You can see what I’ve done with the project so far here: https://github.com/lelandg/PWM-Servo-Hat-Through-Apache-Demo

Here’s a screenshot of the current version (click to see full-size):

[Screenshot: the PiServed show_info page at 192.168.1.91, captured 2016-03-29]

Note: debug output will show up after you submit the form one or more times and/or while the page is processing (servos are moving in this case).

I have plans to add more customization. I have an RGB LED hooked up to 3 of the PWM output ports, and I’ve confirmed that it works with a test Python script, so I’m working on the HTML embedded within the Python. Once I have it working, I’ll upload that, but probably will not update the image on this page. (Because then it would never stop changing.) If significant changes are made, I may create a new post detailing any trouble I encountered. (Really not much at all so far on this one.)

And I may add more items as they occur to me.



The Ghost of Roboduck Jones

An open letter to BL from T.

So I was sitting here reminiscing and guess what popped into my head? Roboduck! Do you guys remember that project? Just a little robotic duck that swam around teasing other ducks to suck on a shotgun. I think it sank. There may have been plans for a flamethrower. There should always be plans for a flamethrower. I wish I still had the pics. Alas, no, and neither does the Wayback Machine, but I was surprised to discover that a new group now holds bltlabs.com. How interesting.

<30 minutes pass>

Well look who I found hanging out in an archive of jointsandjams.com. I feel better, but now I want to work on jointsandjams.com again.

[Image: Roboduck]

In case you are interested, I have cobbled together a little Facebook community page for Section9 and shared a few of our projects over the years. How the hell did we ever have time for any of this? The newfangled Raspberry Pis are just not the same. It’s not fun unless it’s a hand grenade. You can quote me on that.

https://www.facebook.com/section9.space

Adventures with CyanogenMod!

Part of the Mjolnir project involved rooting a Barnes and Noble Nook Tablet in order to develop and install the Mjolnir Control Panel app (MCP). The stock B&N version of Android is extremely limited and does not allow access to the Play Store or installation of third-party apps. Rooting the device allowed me to bypass these restrictions, but the process broke a lot of functionality on the device, so I decided to restore the factory default configuration. Unfortunately, this was even more disastrous and put the device into a state where it could not get past the initial B&N registration screen. At that point I decided to replace the stock ROM entirely and selected CyanogenMod after some research on the xda-developers forums. My first attempt was not successful, but after further research I realized I had been following instructions specific to the Nook Color rather than the Nook Tablet. After some additional research and tinkering I was able to use ClockworkMod to burn a CyanogenMod 12.1 ROM onto my Nook’s internal eMMC storage, along with TWRP 2.8.7 and Open GApps 5.1.

Sifting through all of the various and often conflicting information on how to accomplish this was a bit tedious, but in the end the process was simple and smooth. My overall impression of CyanogenMod is positive, although I must admit that it doesn’t seem to offer much in the way of functionality above and beyond stock Android. It does, however, offer remote location, remote wipe, WiFi tethering and an audio configuration tool, all of which look interesting. To get all of the special CyanogenMod features, you need to register for an account (which I did not do).

At any rate, it’s great to have access to a recent version of Android, the full Google Play store and the additional CyanogenMod features on such an old device, which prior to this had been just a brick on the shelf for years except for the rare occasion when I needed to take Mjolnir out for a drive. 🙂

Mojave Grass Detector

An update from Nero:

Hello Bot-Doctors,

I started working on the mow-bot project again and want to show-and-tell my current work.

Picture 1 mainly shows the grass counter, which is just a prototype. The counter is constructed of 6 slats of wood glued to a 2×4. The 6 slats make up 5 compartments. Each compartment has a laser on one side and a light sensor on the other. The design is simple: the laser shines directly on the light sensor, which sends a High to the Arduino. When grass moves through the beam, a Low is sent. The Arduino (a Nano located on the right side of the breadboard connected to the 2×4) counts the Lows for a short period of time. During that period, the compartments with higher counts have more uncut grass passing through them than those with lower counts. The 5 compartments’ counts are sent to the master board (located in the cardboard box) using I2C communication.

Also on the 2×4 breadboard is a 5 V / 3.3 V power supply module, which is powered by the 9 V battery. The modules are very nice. They are made to plug into the breadboard (in the positive and ground pins on both sides of the board) and have jumpers (on each side) that allow you to select 5 or 3.3 volts. So you can power one rail of the board with 5 volts and the other with 3.3, or use the same voltage on both sides.

Picture 2 is just a close-up of the 2×4 breadboard. In the middle are the 5 ICs that convert the light signals to Highs/Lows and send the result back to a digital pin on the Nano board. I bought the ready-made ICs on eBay for about $1 apiece.

Picture 3 shows a close-up of the master board (Arduino Mega) and the white breadboard.  The breadboard has a power supply just like the one used on the laser component.  This powers a Bluetooth module (tall rectangular thingy) and powers the 3-axis compass (small square).  The Bluetooth module communicates between my Windows cell phone and the Mega board using a serial connection, but the compass uses I2C.  So the compass and the laser use the same I2C connection.

My Windows phone app was created using Microsoft Visual Studio. There’s a plug-in for phone apps, so it’s pretty nice that I can use Visual Studio to design my interface.

I bought a higher-dollar GPS a few weeks ago from SparkFun. The GPS uses WAAS, which is supposed to give me up to 2.5 meter accuracy. My old GPS was giving me around 10 meters, so I stepped up and paid $40 to get better positioning. Hope it works. The GPS has a tiny JST plug, so I need to fool with the wire before I can hook it up. The code is ready to receive the GPS coordinates, but I’m waiting on my shipment of Nanos so I can have a dedicated processor to retrieve the results. I2C communication will also be used to connect the GPS Nano to the Mega board.

Once I get the GPS communicating, I’ll start on the sonar module.  Actually, the sonar module is complete, but it was previously connected to a PIC 16F628 chip.  I’m done with the PIC chips, so I’ll connect it to a Nano when they arrive.   I’ll send more pics as I get more sensors connected in the future.

And another update:

I’m currently rebuilding the laser sensor housing. The previous pics I sent showed the sensors embedded in a 2×4, but that was just for testing. My little robot wouldn’t be able to carry the heavy 2×4, so I rebuilt the housing using balsa wood. I’ve included pics of the balsa version and plan on hooking it up to the robot this week. I’ll send pics of the monster when I get it all connected.


Info

Section9 is a computer club based out of the Springfield, Missouri area. For more information, please see the About Us page or follow us on Facebook.