Max Krebs

A Sidebar to Gloat

On February 28th, 2017, in a perfect representation of how this year is going, Amazon’s AWS S3 service fell over. Amazon described it as ‘Increased Error Rates’, although to me, that seemed to be understating the situation a bit.

S3 is Amazon’s cloud storage solution that most of the Internet uses to serve files and assets like images or PDFs. What happened was that some issue left S3’s US-East region unable to accept or serve connections. I would have thought this meant maybe some images on some sites would be broken, but this outage illustrated the danger of putting all your technology eggs in one basket. One outage in one isolated service is an inconvenience, but the cascade of failures as inextricably linked systems go down together is a major vulnerability in our web infrastructure. It reminded me very much of the mass DNS outage caused by a DDoS attack on one DNS provider.

As you know if you’ve been reading my series about migrating my hosting from Heroku to Linode, just the week before the S3 outage I moved this site, and (just the day before the outage) my podcast discovery site, onto a Linux VPS. This comes as a giant relief to me because, despite the fact that this shouldn’t be the case, Heroku apps are down because of the S3 problems. My question is: why are a hosting provider’s app containers going down because a file storage service is down?

The bottom line is: if I hadn’t moved my hosting infrastructure from Heroku to a Linux VPS on Linode, all of my websites would be down right now and there would be nothing I could do about it.

I can’t gloat too much. Both sites use S3 to serve images; I just happened to pick the Oregon region, which isn’t experiencing any outages. But now I will be looking for a better solution for hosting images that isn’t so prone to taking down the entire web.

Setting up a VPS on Linode

This is Part Two in my series of posts on moving from Heroku to Linode for web application hosting. You can read part one here.

Front Matter

The most intimidating part of setting up a self-managed hosting solution, for me, was figuring out exactly how to configure my server at an OS level. I am not a sys-admin (or at least I am not one full-time), so it was scary thinking about managing a machine entirely from the command line. I had played around with Ubuntu servers before, for fun and for university assignments, and I had fairly successfully set up hosting for a client’s Wordpress site on Linode.

I am going to be upfront here: Linode’s documentation is inconsistent in quality. The most basic “Getting Started” and “Securing Your Server” guides are fine. They are easy to follow and descriptive, but the deeper you dig into guides for specific implementations, the more the quality declines. My experience is specifically with the guides for setting up a Rails application, so your mileage may vary. For example, I find the guides for setting up a basic LAMP stack to be better than the ones for a Rails stack. Because of this, after a certain point, I had to look elsewhere for meaningful instruction.

Ironically, I found that DigitalOcean, Linode’s main competition, has better generalized documentation for setting up servers and stacks. Because of this, I put together my own checklists based on a combination of Linode’s and DigitalOcean’s documentation. Those checklists are what I am basing these posts on, but I will link out to the relevant documentation as I go along in case you want more detail and/or want to fact check me.

Creating Your Server

I am going to assume that you have already gone through all the steps required to actually procure a Virtual Private Server on whatever hosting provider you prefer and have provisioned it with whatever flavor of Linux you are most comfortable with. I will be using Ubuntu 16.04 on Linode.

Getting Started


Once your server is created and the image is built, find your server’s IP address (in the Linode control panel it is under “Remote Access”) and SSH in using the root password you created when deploying your server.

$ ssh root@ipaddr

The first thing you should do is make sure all your software is up to date by running:

% apt update && apt upgrade

Quick sidebar: if you are on an older version of Ubuntu, you will have to use apt-get instead of apt, since apt is not available. Also, if you are using a different distribution, mentally replace apt with your package manager of choice.

Anyway, moving on. Next you want to set the hostname for this machine. There are two different ways to do this depending on which version of Ubuntu you are running.

# Ubuntu versions before 15
% echo "hostname" > /etc/hostname
% hostname -F /etc/hostname
# Ubuntu 15 and later
% hostnamectl set-hostname hostname

I am going to take one sidebar here and install zsh and vim, just to make my life easier.

% apt install vim zsh -y
% zsh

This is just a personal preference thing. Ubuntu ships with vi, but I am used to vim, and zsh is my shell of choice.

Now we want to add the hostname that we set before to the hosts file. Using your editor of choice (e.g. vim, nano, vi, emacs, etc.), add this line to /etc/hosts.

ipaddr hostname.tld hostname

Replace ipaddr with your server’s IP address, hostname with the hostname you just set, and hostname.tld with the domain name you plan to use for your server. If you don’t know what it will be yet, just set it to hostname.com; you can always change it later.
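For example, with a placeholder documentation IP and a made-up hostname (substitute your own values), the line might look like this:

203.0.113.10 example.com example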

The last step in the general setup is setting the timezone of your server.

% dpkg-reconfigure tzdata

Securing Your Server


This is an optional step, and I am not really sure how I feel about it myself, but you can configure Ubuntu to automatically update your packages by using unattended-upgrades.

% apt install unattended-upgrades -y

This will create a configuration file at /etc/apt/apt.conf.d/50unattended-upgrades. There are a lot of configuration options here, so I won’t go into detail about them, but you can find more info here.
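As a rough illustration (an excerpt, not the whole file), the Allowed-Origins block controls which package sources get upgraded automatically, and Automatic-Reboot is one of the optional knobs:

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
// Reboot automatically if an upgrade requires it (left off here)
Unattended-Upgrade::Automatic-Reboot "false";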

Something you definitely want to do is create a non-root user. Everything we’ve done up until now has been as the root user, but for security purposes, we don’t want our web app, or any of the software we install in the next post, to run as root. For simplicity, the user we create will also be the one used to deploy our Rails app with Capistrano. We also want the user in the sudo group, so they can run commands as the superuser without compromising the security model.

% adduser user_name
% adduser user_name sudo

You will be prompted to set the Unix password for that user, and now you can log in over SSH with ssh user_name@ipaddr using the password you just set.

For this next step, you will want to run the next two commands from your local machine, so log out of your server using exit. I am also going to assume you are using a Mac (because that seems to be the most common machine in Rails development). We are going to create an SSH key and upload it to your new server, both because it is more secure than logging in with a password and because I get sick of constantly typing in passwords. Because they are better people than I am, GitHub has much better documentation around checking for existing SSH keys and generating them, and they even have guides for Windows.

For our purposes, we are going to use a package called ssh-copy-id to upload our ssh public key. I believe this comes packaged with some distros of Linux, but for macOS, you can install it using Homebrew.

$ brew update
$ brew install ssh-copy-id

If you know you already have generated an SSH key on your local machine, you can go ahead and skip this next step, as it will potentially overwrite existing keys.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
$ eval "$(ssh-agent -s)"
$ ssh-add -K ~/.ssh/id_rsa

This will walk you through the steps of creating an SSH key. It should be pretty straightforward; just go with the defaults and set a strong passphrase.

Whether you had to create one just now, or you already had an existing SSH key, upload it to your Linode server using ssh-copy-id.

$ ssh-copy-id user_name@ipaddr

One more optional step that I like to take for convenience’s sake is to add your server to the SSH config file on your local machine. This will allow you to log in without having to type out the entire ssh command every time. To do this, open ~/.ssh/config with your favorite text editor (you may have to create this file first) and add an entry formatted like this:

Host hostname
  HostName ipaddr
  User user_name

For example, the config entry for this website on my local machine looks like this.

Host sadrobot
  HostName 8.8.8.8
  User deploy

Go ahead and log in to your server with your new fancy non-root user. If you did both of those steps, you should be able to just run ssh hostname and log in successfully.

A couple of last security settings to configure. I always disable root login over SSH to force myself to use my non-root user. This can be done by setting PermitRootLogin no in /etc/ssh/sshd_config. That is a long file, so to find the relevant section, search for # Authentication. For extra security, you can also prevent anyone from SSHing into your server with a password in the same file. The downside is that you can then only log in from a machine that has your SSH key on it, so if you switch machines or overwrite that SSH key, you are SOL.
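The two relevant directives look like this (leave PasswordAuthentication alone if you want to keep password logins as a fallback):

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no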

Regardless, restart the SSH service to apply the changes to the conf file.

% sudo service ssh restart

Fussy Customizations

These last couple of items are just customizations that I do to make life on the server a little more bearable and more like my development machine. Feel free to steal them or adapt them to suit your preferences.

Switch the default shell to zsh.

% chsh -s $(command -v zsh)

Set the zsh prompt to be more useful.

% echo "autoload -Uz promptinit" > .zshrc
% echo "promptinit" >> .zshrc
% echo "prompt redhat" >> .zshrc

Add a sanity alias for ls, because I hate the default ls behavior.

% echo "alias ls='ls -alph'" >> .zshrc

Alias vim so I don’t accidentally open vi when I meant to open vim.

% echo "alias vi='vim'" >> .zshrc

Reload zsh config.

% source .zshrc

Download and use my VimRC configuration. You might have to install git first for this one.

% git clone git@github.com:P-Krebs/vimrc.git ~/.vim_runtime
% sh ~/.vim_runtime/install_awesome_vimrc.sh

What’s next?

Phew. I am sure that was a lot. Hopefully that wasn’t too intimidating. I think this should leave you with some sensible defaults for hosting and basic security on your server. Of course, I could be way off base here, but if I am doing something wrong, I am sure the internet will tell me.

In the next part, we are going to actually do the interesting stuff: deploying your Rails app to your newly created server! Woo!

Moving from Heroku to Linode

Part One

Heroku is a pretty great service. It takes all of the complexity of web infrastructure and reduces it down to a pretty simple interface. Deploying your app is as simple as pushing to a git repository, and you don’t have to worry about provisioning users or databases or anything to do with the actual server. I am a firm believer that Heroku is a great choice for many people and that everyone from beginners to large companies can, and should, use Heroku for their hosting. For me though, I think I am ready to break up with Heroku.

It’s not you, it’s me (barf)

Nothing bad happened between me and Heroku. It would almost be easier if there was some catastrophic failure that pushed me from the platform. But we just want different things, Heroku and I. And I am not a fan of the people they have been hanging out with.

In 2010, Salesforce bought Heroku for $212 million in cash, and the more I get to know Salesforce, the more I don’t really want their hands in my soup. And while it’s probably fine now, I don’t want to be forced to move and then scramble to figure out how to migrate my entire infrastructure. I want to get out of this relationship before it turns sour.

Also, this gives me a chance to expand my skill set, and it means more independence as a developer and less reliance on closed systems.

The Plan

So as of right now, I am going to be hosting all green-field projects on Not-Heroku. On a new project at my Job-y Job, we couldn’t use Heroku since we needed a static IP, and our client barely wanted to pay us for the work, let alone pay for a Heroku Enterprise account (thanks for that one, Salesforce), so I took the chance to go through the process of setting up a production Rails environment on a VPS. I went through the entire process four times all told. The first two were total screw-ups that ended in deleting the entire server and starting over. The third time was the charm, and the last run-through was to get the process really squared away and to document it, which I will now pass on to you.

I think there are going to be roughly three parts to this series. This is Part One, the context and introduction. Part Two will be setting up the Virtual Private Server and all the standard prep work. Part Three will be setting up and deploying Rails. My hope is that this will be the comprehensive series of articles that I would have wanted three weeks ago when I was figuring all this out. Hopefully it will be of some use. And if all goes according to plan, by the time this series is finished, this blog won’t be hosted on Heroku anymore.


iOS is the new Linux

Steve Jobs described the iPad as “The Future of Computing” when he introduced it in 2010. When it first launched, its sales were growing even faster than the iPhone’s had. In the last few years, however, quarterly iPad sales have just kept falling.

Apple posted some record-breaking sales figures this quarter, but the iPad continued its downward trend, down 22% from the same quarter last year. The iPhone, for comparison, was up around 5% year over year.

This has continued the trend of hand-wringing over the state of the iPad. Some think it may not be the future of computing at all while some smart and dedicated nerds use iOS and iPads to do serious work.

As a software developer, I don’t think I will ever be able to use iOS professionally. I don’t think I would ever want to, either. I love PCs too much, and I don’t think Apple will ever allow terminal apps in the App Store or give me file-system access. I’ve even considered moving to Linux or Windows because of the neglect of the Mac. That is just me, but it’s a trend I am seeing outside of just nerds.

The future of computing is the smartphone. That should be pretty clear. For a lot of people, it is their primary, if not their only, computer. So where does that leave the iPad and the Mac? Or more generally, the PC and tablet markets?

I am not a business analyst, but it just seems like professional iPad use is probably never going to take off. Most people use their iPads for reading or watching video.

Every year, we hear about “This Year will be the Year the iPad sales grow.” Or “This year there will be major productivity features for the iPad.” Or something along those lines. That sounds a lot like “This year will be the year of Linux on the desktop.”

Linux is a great server operating system the same way iOS is a great consumption operating system. And iOS is a poor productivity operating system the same way Linux is a poor desktop operating system. Some people are dedicated enough to invest the time into making it work for them, but the rest of us will just stick with the easiest and most productive option.

So what does that make the Mac in this metaphor? Linux : iOS as macOS : Windows?

Rails <3 Yarn

Declaring My Biases

I don’t like JavaScript. Not very much, anyway. Technically, it was the first real programming language that I learned (if you don’t count HTML as a programming language, which is both arguable and a topic for another time), way back three or four years ago when I did some Codecademy front-end tutorials. I’ve written whole hybrid mobile applications in JavaScript. I had an entire course in university taught in Angular. But I thought it would be important to tell you, here and up front, that I don’t enjoy using or writing JavaScript.

I don’t have any particularly unique or interesting reasons behind this, just all the usual ones. I think server-side rendered HTML is still the best overall way to architect web apps. Plus the language itself is a pain, although I hear ES6 is going to make that all better (and this year is the year of Linux on the desktop).

In my own development work, I tend to stick to using JavaScript only when I think it makes sense: dynamic DOM edits, fancy form fields, etc. I try to keep npm out of my Rails projects because that seems like way too much additional complexity. Instead, when I need to use a JavaScript package, I download the source and manually add it to the /vendor path. This was a pain, but I was avoiding the problems of dependency hell, right?

Okay, Let’s Get Down to Business[1]

As part of my attempt at the #100DaysOfCode challenge, I am working on my podcast discovery site, in which users can add shows, hosts, and podcast networks to a database and then browse and search to find related shows. Each model in the app has a description field of some kind, and I wanted that field to support Markdown editing and rendering. The Markdown rendering is dead simple with the Redcarpet gem, but I was struggling to find a good WYSIWYG text editor plugin that supported Markdown.

I really liked SimpleMDE, but it has a series of dependencies, so it requires a JavaScript package manager like npm or bower to install. I had never even tried to mix npm and Rails, although I thought it would be theoretically possible. Possible or not, I was dreading having to figure it out. Then I remembered that Facebook recently released yarn, a new client for the npm package registry. It made quite a splash when it shipped, and I noticed that Rails was moving in a distinctly yarn-friendly direction. I thought it would be worth a shot. If it’s good enough for DHH, then it is Damn Well Good Enough for me.

It’s a Tight Knit System[2]

I really only found one other guide for setting up yarn with Rails: a post by Jiazhen Xie on sheerdevelopment.com. I followed along with that post and modified it a bit, but it’s still a good introduction (it definitely has less rambling and fewer puns than this post).

Installing yarn is about as easy as any other software tool. Mac users can run brew install yarn, Windows people have an installer that I can only assume will run an installation wizard, and Linux folks know what they are doing.

After this, I added the yarn directory to my path by putting

export PATH="$PATH:`yarn global bin`"

to the bottom of my .zshrc file.

If yarn --version returns successfully, then you are good to go.

That is it for setting up yarn at a global level. Next, cd into your Rails project (or create a new one) and run yarn init. This will give you some prompts for things like your email address, the project name, and a version number. Don’t panic and start randomly hitting keys like I did when I first came across this. Each option has a default value, so if you can’t think of an original project name, just hit enter until the prompts finish. This will create a package.json file in the root project directory.
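For reference, the generated package.json is tiny; with all the defaults accepted it looks something like this (the name and version here are just placeholders):

{
  "name": "podcast-discovery",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT"
}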

Next, to be in alignment with the Rails defaults going forward, I added a node_modules into the vendor folder to hold packages.

% mkdir vendor/node_modules

And then add the new folder to the asset path so any modules installed there will be added to the asset pipeline by adding this line into config/initializers/assets.rb:

Rails.application.config.assets.paths << Rails.root.join('vendor', 'node_modules')

Now you are ready to install some packages. For me, I needed SimpleMDE for the description text fields so I ran

% yarn add simplemde --modules-folder ./vendor/node_modules

Even though yarn was never mentioned in the SimpleMDE readme or in the installation instructions anywhere, because the library is on npm, it installed flawlessly along with all of its dependencies.

Then require the codebase in your manifest file:

// app/assets/stylesheets/application.scss
*= require path/to/code
// or
@import 'path/to/code'

// app/assets/javascripts/application.js
//= require path/to/code

In this example, all I needed to do after that was call the JavaScript function that converts the text field into the Markdown editor, and that is that.
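For the curious, that call is roughly the following (a minimal sketch; it assumes the description text area has the id description, which will differ in your markup):

// run wherever your page JavaScript lives
var editor = new SimpleMDE({ element: document.getElementById("description") });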

Aftermath

This was so much more straightforward than I anticipated it being. It was actually, dare I say, enjoyable. The biggest pain points for me were 1) not expecting my package manager to ask me questions and 2) making Rails play nice with yarn. And even then, it was a pretty easily fixed issue with the asset pipeline.

I couldn’t believe how quickly I could get this all implemented. It was some great instant gratification seeing the beautiful Markdown editor show up on my site. I can’t imagine how much of a pain it would have been without yarn.

Honestly, I am glad that Rails is moving to make yarn a default. I think the easier you make it for developers to include third-party packages, the less JavaScript people actually have to write. And that, I think, is a beautiful thing.


  1. to defeat the Huns
  2. my partner is a knitter, I am so sorry

REST APIs with Salesforce

This is going to divert a little bit from the normal area of programming that I cover. At my joby-job, we do most of our work developing on the Salesforce platform. Normally, Salesforce is used for really boring CRM and marketing, but our clients come to us with problems and we solve them. Tangentially, I’ve been thinking a lot about my relationship with Salesforce as a platform, and I think the incredible restrictions Salesforce places on its developers have, in some ways, been making me better at writing good software, but more on that soon, I think.

For right now, I wanted to go into detail about a particular type of solution that I’ve been working on in the last couple of months, and that is writing REST APIs for Salesforce.

The use case is this: a client has an existing Salesforce database that they are using for pretty straightforward contact management, but they want to integrate data from an external data source. The primary example I’ve been working with lately is a client with a form on their website that they want to feed into their Salesforce database, but with some domain-specific logic involved. There are ways to send data into Salesforce from elsewhere on the web, but I find that they are mostly too restrictive and don’t let you apply custom logic in an intuitive and scalable way.

Thus enters the Apex REST framework. Previously, I’ve built most of my REST APIs in Rails, and I would still prefer to work in that framework and language (although I am becoming increasingly interested in both Sinatra and Elixir). However, after getting over the initial learning curve, I can bust out a pretty solid Apex REST API for sending data into and out of Salesforce.

The first step to setting up an Apex class for access via REST is to create a global class with sharing and add the @RestResource class annotation.

@RestResource(urlMapping='/Project/*')
global with sharing class Project {

}

The urlMapping part of the class annotation denotes the, you guessed it, URL used to access the class.
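Concretely, Apex REST classes are exposed under the /services/apexrest/ path of your Salesforce instance, so the mapping above ends up at an endpoint like this (the instance URL is a placeholder):

https://yourInstance.salesforce.com/services/apexrest/Project/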

That is all the class level setup that is required to register it for access through REST. There might be some other configuration needed to make your new REST API fully functional such as registering a connected app and setting up user profiles, but that is outside of the code and beyond the scope of this post.

The next step is defining methods to handle the various types of HTTP requests the API will receive. This is done by defining a method with the annotation that matches the request type. You can name the methods whatever you like, but I find it helpful to stick with a convention, and since Apex is roughly based on Java, I stick with the Java servlet conventions of doGet, doPost, etc.

@HttpPost
global static void doPost(String name, Date dueDate, String Description) {
  ...
}

If you are used to how Rails treats controller methods, the parameters in the method definition may stand out to you (they are extremely Java-y), but that is actually how you define the JSON payload you are expecting. For the method definition above, the expected JSON payload would be:

{
  "name": "projectName",
  "dueDate": "projectDueDate",
  "Description": "projectDescription"
}

Salesforce’s JSON parser takes the first depth level of the JSON payload and passes it into the parameters of the HTTP method. This is sufficient for most requirements. Things get a little hairier if you need to send nested JSON, but it is possible.

Take for example this payload:

{
  "name": "projectName",
  "dueDate": "projectDueDate",
  "description": "projectDescription",
  "developer" : {
    "firstName": "developerFirstName",
    "lastName": "developerLastName"
  }
}

If you try and send that payload to the above function, Salesforce is going to send you back a JSON parsing error before you even hit the class method. To address nested JSON, you have to include an Apex inner class to represent that nested object.

@HttpPost
global static void doPost(String name, Date dueDate, String Description, Developer developer) {
  ...
}

global class Developer {
  public String firstName {get;set;}
  public String lastName {get;set;}
}

You can then access the nested JSON data like any other object. Again, it’s fairly unintuitive, but it makes its own kind of weird sense once you get used to it.

After that, you can execute whatever other code you need in the method body, and whatever the method returns will be sent as the response to the HTTP request. You can do this explicitly, by returning something like 'Successfully Created Project', or by setting the response body on the Apex RestContext object.

String responsebody = '{' +
                      '"id" : "'+project.Id+'",' +
                      '"success" : true,' +
                      '"errors" : [ ]' +
                       '}';
RestContext.response.addHeader('Content-Type', 'application/json');
RestContext.response.responseBody = Blob.valueOf(responsebody);

This will send back whatever JSON you define. I like to send back the Id of any record created in the course of the method because that follows the conventions of the other Salesforce APIs.

That is all there is to it. There is some weird annotation syntax and the JSON parser is a little finicky, but there is power in being able to define your own way to send data in and out of a Salesforce database.

Ruby Bare Names

I’ve been slowly making my way through The Ruby Programming Language book and I came across something interesting.

With Ruby variables, you can scope variable names by adding a prefix symbol to the beginning of the name. For example:

type = 'Local Variable'
TYPE = 'Constant'
$type = 'Global Variable'
@type = 'Instance Variable'
@@type = 'Class Variable'

In Ruby[1], names that don’t have a prefix symbol (i.e. local variables) have a bit of peculiar behavior that leads to some interesting patterns. If the Ruby interpreter comes across a bare variable name, it checks for 1) a value assigned to that variable name, and if that fails, it checks for 2) a method with that name.

def ambiguous_number
  2
end

ambiguous_number = 1
puts ambiguous_number

If you copy and paste that bit of code into a Ruby file and execute it, the output will be 1. However, this piece of code is different.

def ambiguous_number
  2
end

puts ambiguous_number

In most other programming languages (I think), you will get some variation of a NameError to tell you that ambiguous_number isn’t defined. In Ruby, the program will output 2.

This is, superficially, a nifty quirk of the language, but when you get into some deeper coding, there are some interesting patterns that use this quirk.

The first place I noticed this pattern was in Rails strong parameters. Before strong parameters were introduced in Rails 4, you could write your controller like this:

class UsersController < ActionController::Base
  def create
    @user = User.new(params[:user])
    ....
  end
end

That create method creates a new user from the :user parameters hash passed to the controller from the form in the view.

The strong parameters way of writing this controller is:

class UsersController < ActionController::Base

  def create
    @user = User.new(user_params)
    ....
  end

  private

  def user_params
    params.require(:user).permit(:name)
  end
end

If you look at this, you would think the create action is making a new User from the values in the local variable user_params, but you would be wrong. Look closely, and you will notice the private method below called user_params. The Ruby interpreter sees the call to user_params and first looks for a local variable with that name, but when it doesn’t find one, it uses the return value of the method user_params. Handy.

I’ve heard this feature of Ruby described as “Bare Names.” The real value of bare names, that I’ve found so far, is within the context of the Extract Method refactoring pattern.

In short, extracting a method entails taking functionality out of a method and creating a new method that contains that functionality, in order to reduce the responsibility of any one method. An easy way to do this in Ruby is to use bare names to pull fairly complicated logic out of local variables and into methods of their own.

This example is taken from a Rails app I’ve been building as a learning/example app from an intermediate Rails series of screencasts. It’s basically just a Twitter clone. The controller below is for a FollowingRelationship model, which controls the follower/followee relationship between users.

class FollowingRelationshipsController < ApplicationController
  def create
    user = User.find(params[:user_id])
    current_user.followed_users << user
    redirect_to user
  end
end

Not the most complex controller, but it’s ripe for a refactor.

class FollowingRelationshipsController < ApplicationController
  def create
    current_user.follow user
    redirect_to user, notice: "Now following user."
  end

  private

  def user
    @_user ||= User.find(params[:user_id])
  end
end

We extracted the core functionality of one line, namely setting a local variable from the parameters, and put it in a method called user. This refactor uses the bare names behavior of Ruby to extract a method out of an (albeit simple) overly complicated controller action.[2]

This was a delightful discovery. I’ve seen the bare names pattern used before (especially in the context of strong params), but it was exciting to find a concrete explanation for the behavior.


  1. I think this is a uniquely Ruby feature; I can’t think of any other language that handles variable invocation and assignment this way, but I am probably wrong.
  2. The @_user syntax there is just a convention to signal that you should be using the method user and not the variable @user.

TIL: The Worst Part is the Waiting

Today I learned the most annoyingly valuable lesson of all: Patience…and SSL

There are a few aspects of technology (web technology specifically) that freak me out once I run into them. One of the big ones is DNS[1], although I am slowly becoming more comfortable with that.

Another (related) example is SSL. I always knew I needed to put SSL on everything these days, but it seemed expensive/complicated. Luckily, DNSimple (my DNS service of choice) recently announced a beta integration with Let’s Encrypt. Let’s Encrypt SSL certs are less flexible than other SSL options, but the ease of use makes up for that.

Heroku also recently changed the way they handled SSL on the hosting side, so I figured it was time to stop putting it off.

The setup was much easier than I anticipated:

1. Request an SSL cert from DNSimple.
2. Download said SSL cert to your machine.
3. Run heroku certs:add <cert path> to add the cert to your Heroku app (see the example below).
4. Either using DNSimple’s one-click setup for Heroku SSL, or by manual entry, configure your DNS records to point to the new Heroku SSL endpoint.
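For reference, step 3 looks something like this (the file names and app name are placeholders for whatever DNSimple gave you and whatever your app is called):

$ heroku certs:add server.crt server.key --app my-app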

After that, it’s just waiting. And as I found out, the waiting is the most annoying part. Damn propagation. I spent the last hour nervously refreshing my site, worried I had done something wrong, and hoping for the terrifying Google Chrome privacy warnings to go away.

Which, as you know, they finally did. Seriously, that was the part of the process with the most friction. There is no excuse to not have SSL on your site. Even a n00b like me can do it with enough patience.


  1. Also: OAuth

Static Pages with High Voltage

In the last couple of Rails apps that I’ve been working on, I’ve needed at least a couple of static pages; nothing fancy, just a landing page and an about page. This need led to a common anti-pattern I kept running into: creating a controller called HomeController that contained empty methods used only for routing to the associated views.

# app/controllers/home_controller.rb
class HomeController < ApplicationController
  def home
  end

  def about
  end
end

With code, I know less is more, but this is a little much. Why even have this controller if it will be essentially empty?

As with most problems I run into, there is a gem for that.

High Voltage is a gem for easily serving static pages, and it does that one thing very well. After adding High Voltage to your Gemfile and running bundle, all you need to do is create an app/views/pages directory and put your ERB files there. Your pages will then be served at a /pages/:id route, where the id is the filename. There are some more options and overrides you can make, but the one configuration change I do make is using the High Voltage router for route drawing.

# config/initializers/high_voltage.rb
HighVoltage.configure do |config|
  config.route_drawer = HighVoltage::RouteDrawers::Root
end

This allows an about page that would normally be routed through /pages/about to be available at /about. I make sure to enable this because I am a neat freak and don’t like having to use the “pages” prefix for routing.
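Putting it all together, the whole setup amounts to roughly this (the about page is just a hypothetical example):

# Gemfile
gem 'high_voltage'

# app/views/pages/about.html.erb
# => served at /pages/about by default, or at /about with the Root route drawer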

I wish I had known about this gem before; it would have saved me some pointlessly empty controllers.

TDD Really Works

As I learn more about Rails and become more involved in the Ruby community, one of the universal truths I am coming to realize is that we care greatly about writing tests. So I decided to force myself to do more test-driven development, both at work and in my personal projects. I’ve been digging into RSpec and the various gems that go along with it, including Capybara, Factory Girl, and Faker (I might do a post about these later).

Since I am just starting out with a serious TDD practice, it can be a little hard to find the motivation to stick with it. I know my initial reaction was to think, “Why am I wasting my time writing tests? I would rather just use the time to actually write the code.” A couple of weeks ago, however, I had an experience that started to make me realize the real value of TDD.

In an attempt to strengthen my Ruby chops and get better at code review (something I really need), I’ve been doing coding exercises from exercism.io, a community of programmers who complete challenges and then go through each other’s solutions and offer constructive feedback. One of the exercises is called Raindrops, and it is a variation on the FizzBuzz problem. In the Raindrops problem, you write a program that converts a number to a string, the contents of which depend on the number’s factors. If the number has 3 as a factor, output ‘Pling’; if the number has 5 as a factor, output ‘Plang’; if the number has 7 as a factor, output ‘Plong’; and if the number does not have 3, 5, or 7 as a factor, just pass the number’s digits straight through. For example:

Raindrops.convert 28 # => Plong
Raindrops.convert 30 # => PlingPlang
Raindrops.convert 34 # => 34

When you download exercism problems, they come with a test suite pre-written, and the first time I tackled this challenge, I really struggled. My approach was to just write the code and then test against all the test cases at once. For some reason, I could not get all the tests to pass. In some instances the code output the correct conversion, but other times it would just output nil, and I had no idea why. I decided to just scrap it and start from scratch.
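For context, the pre-written tests are plain Ruby assertions along these lines (an illustrative sketch, not the exact exercism suite):

require 'minitest/autorun'
# assumes the solution lives in raindrops.rb next to this file
require_relative 'raindrops'

class RaindropsTest < Minitest::Test
  def test_number_with_seven_as_a_factor
    assert_equal 'Plong', Raindrops.convert(28)
  end

  def test_number_with_no_matching_factors
    assert_equal '34', Raindrops.convert(34)
  end
end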

This time, I deliberately went one test case at a time writing out a solution that solved that case. Here is the code that came from that process.

class Raindrops
  def self.convert(number)
    response = ""
    if (number % 3 == 0)
      response << "Pling"
    end
    if (number % 5 == 0)
      response << "Plang"
    end
    if (number % 7 == 0)
      response << "Plong"
    end

    if response.empty?
      return number.to_s
    else
      return response
    end
  end

end

From a code elegance perspective, it’s not ideal. It’s a little clunky, but all the tests passed (!!), and you can sort of tell from the structure of the code that I went one test case at a time. The first test case was for a number divisible by 3, so that conditional went first, then 5, and so on as more complicated tests came up.

I submitted that solution and got some great feedback from other programmers which led me to my final solution.

class Raindrops
  def self.convert(number)
    sound = ""
    sound << "Pling" if (number % 3).zero?
    sound << "Plang" if (number % 5).zero?
    sound << "Plong" if (number % 7).zero?
    sound.empty? ?  number.to_s : sound
  end
end

That is much better. My path to this solution was classic “Red, Green, Refactor,” and it was both satisfying and a practical example of how TDD is useful.