Max Krebs

iOS is the new Linux

Steve Jobs described the iPad as “The Future of Computing” when he introduced it in 2010. When it first launched, iPad sales were growing even faster than iPhone sales had. In the last few years, however, quarterly iPad sales have just kept falling.

Apple posted some record-breaking sales figures this quarter, but the iPad continued its downward trend, with sales down 22% year over year. The iPhone, for comparison, was up around 5% year over year.

This has continued the trend of hand-wringing over the state of the iPad. Some think it may not be the future of computing at all, while some smart and dedicated nerds use iOS and iPads to do serious work.

As a software developer, I don’t think I will ever be able to use iOS professionally. I don’t think I would ever want to, either. I love PCs too much, and I don’t think Apple will ever allow terminal apps in the App Store or give me file-system access. I’ve even considered moving to Linux or Windows because of the neglect of the Mac. That is just me, but it’s a trend I am seeing outside of just nerds.

The future of computing is the smartphone. That should be pretty clear. For a lot of people, the smartphone is their primary, if not their only, computer. So where does that leave the iPad and the Mac? Or, more generally, the tablet and PC markets?

I am not a business analyst, but it just seems like professional iPad use is probably never going to take off. Most people use their iPads for reading or watching video.

Every year, we hear about “This Year will be the Year the iPad sales grow.” Or “This year there will be major productivity features for the iPad.” Or something along those lines. That sounds a lot like “This year will be the year of Linux on the desktop.”

Linux is a great server operating system the same way iOS is a great consumption operating system. And iOS is a poor productivity operating system the same way Linux is a poor desktop operating system. Some people are dedicated enough to invest the time into making it work for them, but the rest of us will just stick with the easiest and most productive option.

So what does that make the Mac in this metaphor? Linux : iOS :: Windows : macOS?

Rails <3 Yarn

Declaring My Biases

I don’t like JavaScript. Not very much, anyway. Technically, it was the first real programming language I learned (if you don’t count HTML as a programming language, which is both arguable and a topic for another time), way back three or four years ago when I did some Codecademy front-end tutorials. I’ve written whole hybrid mobile applications in JavaScript. I had an entire course in university taught in Angular. But I thought it would be important to tell you, here and up front, that I don’t enjoy using or writing JavaScript.

I don’t have any particularly unique or interesting reasons behind this, just all the usual ones. I think server-side rendered HTML is still the best overall way to architect web apps. Plus the language itself is a pain, although I hear ES6 is going to make that all better (and this year is the year of Linux on the desktop).

In my own development work, I tend to stick to using JavaScript only where I think it makes sense: dynamic DOM edits, fancy form fields, etc. I try to keep npm out of my Rails projects because that seems like way too much additional complexity. Instead, when I need a JavaScript package, I download the source and manually add it to the vendor/ path. This was a pain, but I was avoiding the problems of dependency hell, right?

Okay, Let’s Get Down to Business1

As part of my attempt at the #100DaysOfCode challenge, I am working on my podcast discovery site, where users can add shows, hosts, and podcast networks to a database and then browse and search to find related shows. Each model in the app has a description field of some kind, and I wanted those description fields to support Markdown editing and rendering. The Markdown rendering is dead simple with the Redcarpet gem, but I was struggling to find a good WYSIWYG text editor plugin that supported Markdown.
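For reference, the Redcarpet side really is dead simple. Here is a minimal sketch of a rendering helper; the helper name and options are my own, not necessarily what the real app uses:

# app/helpers/application_helper.rb
require 'redcarpet'

module ApplicationHelper
  # Hypothetical helper: renders a Markdown description field as HTML.
  def render_markdown(text)
    renderer = Redcarpet::Render::HTML.new(filter_html: true)
    markdown = Redcarpet::Markdown.new(renderer, autolink: true, fenced_code_blocks: true)
    markdown.render(text).html_safe
  end
end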

I really liked SimpleMDE, but it has a series of dependencies, so it requires a JavaScript package manager like npm or bower to install. I had never even tried to mix npm and Rails, although I thought it would be theoretically possible. Possible or not, I was dreading having to figure it out. Then I remembered that Facebook recently released an interface for the npm package repository called yarn. It made quite a splash when it shipped, and I noticed that Rails was moving in a distinctly yarn-friendly direction. I thought it would be worth a shot. If it’s good enough for DHH, then it is Damn Well Good Enough for me.

It’s a Tight Knit System2

I really only found one other guide for setting up yarn with Rails: a post by Jiazhen Xie on sheerdevelopment.com. I followed along with that post and modified it a bit, but it’s still a good introduction (it definitely has less rambling and fewer puns than this post).

Installing yarn is about as easy as any other software tool. Mac users can run brew install yarn, Windows people have an installer that I can only assume runs an installation wizard, and Linux folks know what they are doing.

After this, I added yarn’s global bin directory to my path by putting

export PATH="$PATH:`yarn global bin`"

at the bottom of my .zshrc file.

If yarn --version returns successfully, then you are good to go.

That is it for setting up yarn at the global level. Next, cd into your Rails project (or create a new one) and run yarn init. This will give you some prompts for things like your email address, the project name, and a version number. Don’t panic and start randomly hitting keys like I did when I first came across this. Each option has a default value, so if you can’t think of an original project name, just hit enter until the execution finishes. This will create a package.json file in the root project directory.
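Mine came out looking something like this (the values are whatever you typed, or defaulted, at the prompts; the name here is made up):

{
  "name": "podcast-discovery",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT"
}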

Next, to be in alignment with the Rails defaults going forward, I added a node_modules directory inside the vendor folder to hold packages.

% mkdir vendor/node_modules

Then add the new folder to the asset path, so any modules installed there will be picked up by the asset pipeline, by adding this line to config/initializers/assets.rb:

Rails.application.config.assets.paths << Rails.root.join('vendor', 'node_modules')

Now you are ready to install some packages. I needed SimpleMDE for the description text fields, so I ran

% yarn add simplemde --modules-folder ./vendor/node_modules

Even though yarn was never mentioned in the SimpleMDE readme or in the installation instructions anywhere, because the library is on npm, it installed flawlessly along with all of its dependencies.

Then require the package in your manifest files:

// app/assets/stylesheets/application.scss
// (Sprockets directives go inside the comment block at the top of the file)
*= require path/to/code
// or, as a Sass import:
@import 'path/to/code';

// app/assets/javascripts/application.js
//= require path/to/code

In this example, all I needed to do after that was use the jQuery function that converts the text field into the Markdown editor, and that is that.
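That hookup is only a couple of lines. A sketch, assuming the text area’s id is show_description (which is my made-up name; SimpleMDE’s constructor takes a plain DOM element, or a jQuery-wrapped one via $('#id')[0]):

// app/assets/javascripts/simplemde_init.js
// Hypothetical element id -- match whatever your form actually renders.
var editor = new SimpleMDE({
  element: document.getElementById("show_description"),
  spellChecker: false
});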

Aftermath

This was so much more straightforward than I anticipated. It was, dare I say, actually enjoyable. The biggest pain points for me were 1) not expecting my package manager to ask me questions, and 2) making Rails play nice with yarn. And even the latter was a pretty easily fixed issue with the asset pipeline.

I couldn’t believe how quickly I could get this all implemented. It was some great instant gratification seeing the beautiful Markdown editor show up on my site. I can’t imagine how much of a pain it would have been without yarn.

Honestly, I am glad that Rails is moving to make yarn a default. The easier you make it for developers to include third-party packages, the less JavaScript people actually have to write. And that, I think, is a beautiful thing.


  1. to defeat the Huns
  2. my partner is a knitter, I am so sorry

REST APIs with Salesforce

This is going to divert a little bit from the normal area of programming that I cover. At my joby-job, we do most of our work developing on the Salesforce platform. Normally, Salesforce is used for really boring CRM and marketing, but our clients come to us with problems and we solve them. Tangentially, I’ve been thinking a lot about my relationship with Salesforce as a platform, and I think there are some ways that the incredible restrictions Salesforce places on its developers have been making me better at writing good software, but more on that soon, I think.

For right now, I wanted to go into detail about a particular type of solution that I’ve been working on in the last couple of months, and that is writing REST APIs for Salesforce.

The use case is this: a client has an existing Salesforce database that they are using for pretty straightforward contact management, but they want to integrate data from an external source. The primary example I’ve been working with lately is a client with a form on their website that they want to feed into their Salesforce database, with some domain-specific logic applied along the way. There are ways to send data into Salesforce from elsewhere on the web, but I find that they are mostly too restrictive and don’t let you apply custom logic in an intuitive and scalable way.

Thus enters the Apex REST framework. Previously, I’ve made most of my REST APIs in Rails, and I would still prefer to work in that framework and language (although I am becoming increasingly interested in both Sinatra and Elixir). However, after getting over the initial learning curve, I can bust out a pretty solid Apex REST API for sending data into and out of Salesforce.

The first step in setting up an Apex class for access via REST is to create a global class with sharing and add the @RestResource class annotation.

@RestResource(urlMapping='/Project/*')
global with sharing class Project {

}

The urlMapping part of the class annotation denotes the, you guessed it, URL used to access the class.

That is all the class level setup that is required to register it for access through REST. There might be some other configuration needed to make your new REST API fully functional such as registering a connected app and setting up user profiles, but that is outside of the code and beyond the scope of this post.

The next step is defining methods to handle the various types of HTTP requests the API will be sent. This is done by annotating a method to match the request type it handles. You can name the method however you like, but I find it helpful to stick with a convention, and since Apex is roughly based on Java, I use the Java servlet naming of doGet, doPost, etc.

@HttpPost
global static void doPost(String name, Date dueDate, String Description) {
  ...
}

If you are used to how Rails treats controller methods, the parameters in the function definition may stand out to you (they are extremely Java-y), but that is actually how you define what JSON payload you are expecting. For the method definition above, the expected JSON payload would be:

{
  "name": "projectName",
  "dueDate": "projectDueDate",
  "Description": "projectDescription"
}

Salesforce’s JSON parser takes the first depth level of the JSON payload and passes it into the parameters of the HTTP method. This is sufficient for most requirements; things get a little hairier if you need to send nested JSON, but it is possible.

Take for example this payload:

{
  "name": "projectName",
  "dueDate": "projectDueDate",
  "description": "projectDescription",
  "developer" : {
    "firstName": "developerFirstName",
    "lastName": "developerLastName"
  }
}

If you try to send that payload to the method above, Salesforce will send you back a JSON parsing error before you even hit the class method. To handle nested JSON, you have to include an Apex inner class to represent the nested object.

@HttpPost
global static void doPost(String name, Date dueDate, String Description, Developer developer) {
  ...
}

global class Developer {
  public String firstName {get;set;}
  public String lastName {get;set;}
}

You can then access the nested JSON data like any other object. Again, it’s fairly unintuitive, but it makes its own kind of weird sense once you get used to it.

After that, you can execute any other code in the method body that you need to, and whatever the method returns will be sent as the response to the HTTP request. You can do this explicitly by giving the method a return type and returning a value (return 'Successfully Created Project';), or by setting the response body on the Apex RestContext class.

String responsebody = '{' +
                      '"id" : "'+project.Id+'",' +
                      '"success" : true,' +
                      '"errors" : [ ]' +
                       '}';
RestContext.response.addHeader('Content-Type', 'application/json');
RestContext.response.responseBody = Blob.valueOf(responsebody);

This will send back whatever JSON you define. I like to send back the Id of any record created in the course of the method because that follows the conventions of the other Salesforce APIs.
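To round things out, here is a sketch of what calling this API might look like from the client side, in Ruby because that is my comfort zone. The instance URL, token handling, and field values are all stand-ins; the endpoint path follows Salesforce’s /services/apexrest/<urlMapping> convention:

require 'net/http'
require 'uri'
require 'json'

# Stand-in values: in real life the token comes from your connected
# app's OAuth flow, and the host is your org's instance URL.
access_token = ENV.fetch('SALESFORCE_ACCESS_TOKEN')
uri = URI('https://yourInstance.salesforce.com/services/apexrest/Project')

request = Net::HTTP::Post.new(uri)
request['Authorization'] = "Bearer #{access_token}"
request['Content-Type'] = 'application/json'
request.body = {
  name: 'Example Project',
  dueDate: '2017-03-01', # yyyy-mm-dd JSON strings map onto an Apex Date
  Description: 'A test project',
  developer: { firstName: 'Grace', lastName: 'Hopper' }
}.to_json

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts response.body # the JSON you built in RestContext.response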

That is all there is to it. There is some weird annotation syntax and the JSON parser is a little finicky, but there is power in being able to define your own way to send data in and out of a Salesforce database.

Ruby Bare Names

I’ve been slowly making my way through The Ruby Programming Language book and I came across something interesting.

With Ruby variables, you can scope variable names by adding a prefix symbol to the beginning of the name. For example:

type = 'Local Variable'
TYPE = 'Constant'
$type = 'Global Variable'
@type = 'Instance Variable'
@@type = 'Class Variable'

In Ruby1, names that don’t have a prefix symbol (i.e. local variables) have a bit of peculiar behavior that leads to some interesting patterns. When the Ruby interpreter comes across a bare name, it checks for 1) a value assigned to a local variable with that name, and if that fails, it checks for 2) a method with that name.

def ambiguous_number
  2
end

ambiguous_number = 1
puts ambiguous_number

If you copy and paste that bit of code into a Ruby file and execute it, the output will be 1. However, this piece of code is different.

def ambiguous_number
  2
end

puts ambiguous_number

In most other programming languages (I think), you will get some variation of a NameError to tell you that ambiguous_number isn’t defined. In Ruby, the program will output 2.

This is, superficially, a nifty quirk of the language, but when you get into some deeper coding, there are some interesting patterns that use this quirk.

The first place I noticed this pattern was in Rails strong parameters. Before strong parameters were introduced in Rails 4, you could write your controller like this:

class UsersController < ActionController::Base
  def create
    @user = User.new(params[:user])
    ....
  end
end

That create method creates a new user from the :user parameters hash passed to the controller from the form in the view.

The strong parameters way of writing this controller is:

class UsersController < ActionController::Base

  def create
    @user = User.new(user_params)
    ....
  end

  private

  def user_params
    params.require(:user).permit(:name)
  end
end

If you look at this, you might think the create action is making a new User from the values in a local variable called user_params, but you would be wrong. Look closely, and you will notice the private method below called user_params. The Ruby interpreter sees the call to user_params, first looks for a local variable with that name, and when it doesn’t find one, it uses the return value of the method user_params. Handy.

I’ve heard this feature of Ruby described as “Bare Names.” The real value of bare names, at least that I’ve found so far, is in the context of the Extract Method refactoring pattern.

In short, extracting a method means taking functionality out of one method and moving it into a new method of its own, in order to reduce the responsibility of any one method. An easy way to do this is to use bare names: fairly complicated logic that was stored in local variables gets extracted into methods of its own.

This example is taken from a Rails app I’ve been building as a learning/example app from an intermediate Rails series of screencasts. It’s basically just a Twitter clone. The controller below is for a FollowingRelationship model, which controls the follower/followee relationship between users.

class FollowingRelationshipsController < ApplicationController
  def create
    user = User.find(params[:user_id])
    current_user.followed_users << user
    redirect_to user
  end
end

Not the most complex controller, but it’s ripe for a refactor.

class FollowingRelationshipsController < ApplicationController
  def create
    current_user.follow user
    redirect_to user, notice: "Now following user."
  end

  private

  def user
    @_user ||= User.find(params[:user_id])
  end
end

We extracted the core functionality of one line, namely looking up a user from the parameters, and put it in a method called user. This refactor uses the bare names functionality of Ruby to extract a method out of an (albeit simple) overly complicated controller action.2

This was a delightful discovery. I’ve seen the bare names pattern used before (especially in the context of strong params), but it was exciting to find a concrete explanation for the behavior.


  1. I think this is a Ruby-unique feature; I can’t think of any other language that handles variable lookup and assignment this way, but I am probably wrong.
  2. The @_user syntax there is just a convention to signal that you should be using the method user and not the variable @user.

TIL: The Worst Part is the Waiting

Today I learned the most annoyingly valuable lesson of all: Patience…and SSL

There are a few aspects of technology (web technology specifically) that freak me out once I run into them. One of the big ones is DNS1, although I am slowly becoming more comfortable with that.

Another (related) example is SSL. I always knew I needed to put SSL on everything these days, but it seemed expensive/complicated. Luckily, DNSimple (my DNS service of choice) recently announced a beta integration with Let’s Encrypt. Let’s Encrypt SSL certs are less flexible than other SSL options, but the ease of use makes up for that.

Heroku also recently changed the way they handled SSL on the hosting side, so I figured it was time to stop putting it off.

The setup was much easier than I anticipated:

1) Request an SSL cert from DNSimple
2) Download said SSL cert to your machine
3) Run heroku certs:add <cert path> to add the cert to the Heroku app
4) Either using DNSimple’s one-click Heroku SSL setup, or by manual entry, configure your DNS records to point to the new Heroku SSL endpoint

After that, it’s just waiting. And as I found out, the waiting is the most annoying part. Damn propagation. I spent the last hour nervously refreshing my site, worried I had done something wrong, and hoping for the terrifying Google Chrome privacy warnings to go away.

Which, as you can tell, they finally did. Seriously, that was the part of the process with the most friction. There is no excuse not to have SSL on your site. Even a n00b like me can do it with enough patience.


  1. Also: OAuth

Static Pages with High Voltage

In the last couple of Rails apps I’ve worked on, I’ve needed at least a couple of static pages; nothing fancy, just a landing page and an about page. This need led me to a common anti-pattern: creating a HomeController that contains nothing but empty methods used for routing to the associated views.

# app/controllers/home_controller.rb
class HomeController < ApplicationController
  def home
  end

  def about
  end
end

With code, I know less is more, but this is a little much. Why even have this controller if it will be essentially empty?

As with most problems I run into, there is a gem for that.

High Voltage is a gem for easily serving static pages, and it does that one thing very well. After adding High Voltage to your Gemfile and running bundle, all you need to do is create an app/views/pages directory and put your erb files there. Your pages will then be served at a /pages/:id route, where the id is the filename. There are some more options and overrides available, but the one configuration change I do make is using the High Voltage router for route drawing.

# config/initializers/high_voltage.rb
HighVoltage.configure do |config|
  config.route_drawer = HighVoltage::RouteDrawers::Root
end

This allows an about page that would normally be routed through /pages/about to be available through /about. I make sure to enable this because I am a neat freak and don’t like having to use the “pages” prefix in my routes.
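Linking to a page is then just the page_path helper that High Voltage ships with. A sketch, assuming an app/views/pages/about.html.erb exists:

<%# any view or layout %>
<%= link_to "About", page_path("about") %>
<%# => /about with the Root route drawer, /pages/about with the default %>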

I wish I had known about this gem before; it would have saved me some pointlessly empty controllers.

TDD Really Works

As I learn more about Rails and become more involved in the Ruby community, one of the universal truths I am coming to realize is that we care greatly about writing tests. So I decided to force myself to do more test-driven development, both at work and in my personal projects. I’ve been digging into RSpec and the various gems that go along with it, including Capybara, Factory Girl, and Faker (I might do a post about those later).

Since I am just starting out with a serious TDD practice, it can be a little hard to find the motivation to stick with it. My initial reaction was to think, “Why am I wasting my time writing tests? I would rather just use the time to actually write the code.” A couple of weeks ago, however, I had an experience that started to make me realize the real value of TDD.

In an attempt to strengthen my Ruby chops and get better at code review (something I really need), I’ve been doing coding exercises from exercism.io, a community of programmers who complete challenges and then go through each other’s solutions and offer constructive feedback. One of the exercises is called Raindrops, and it is a variation on the FizzBuzz problem. In the Raindrops problem, you write a program that converts a number to a string, the contents of which depend on the number’s factors. If the number has 3 as a factor, output “Pling”; if the number has 5 as a factor, output “Plang”; if the number has 7 as a factor, output “Plong”; and if the number does not have 3, 5, or 7 as a factor, just pass the number’s digits straight through. For example:

Raindrops.convert 28 # => Plong
Raindrops.convert 30 # => PlingPlang
Raindrops.convert 34 # => 34

When you download exercism problems, they come with a test suite pre-written, and the first time I tackled this challenge, I really struggled. My approach was to just write the code and then run it against all the test cases at once. For some reason, I could not get all the tests to pass. In some instances the code output the correct conversion, but other times it would just output nil, and I had no idea why. I decided to scrap it and start from scratch.
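For context, the pre-written suite is just a pile of cases along these lines (paraphrased from memory using minitest, which the exercism Ruby track uses; this is not the exact file):

require 'minitest/autorun'
require_relative 'raindrops'

class RaindropsTest < Minitest::Test
  def test_3_converts_to_pling
    assert_equal 'Pling', Raindrops.convert(3)
  end

  def test_30_converts_to_pling_plang
    assert_equal 'PlingPlang', Raindrops.convert(30)
  end

  def test_34_passes_the_digits_through
    assert_equal '34', Raindrops.convert(34)
  end
end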

This time, I deliberately went one test case at a time, writing out a solution that solved just that case. Here is the code that came from that process.

class Raindrops
  def self.convert(number)
    response = ""
    if (number % 3 == 0)
      response << "Pling"
    end
    if (number % 5 == 0)
      response << "Plang"
    end
    if (number % 7 == 0)
      response << "Plong"
    end

    if response.empty?
      return number.to_s
    else
      return response
    end
  end

end

From a code elegance perspective, it’s not ideal. It’s a little clunky, but all the tests passed (!!), and you can sort of tell from the structure of the code that I went one test case at a time. The first test case was a number divisible by 3, so that conditional went first, then 5, and so on as more complicated tests came up.

I submitted that solution and got some great feedback from other programmers which led me to my final solution.

class Raindrops
  def self.convert(number)
    sound = ""
    sound << "Pling" if (number % 3).zero?
    sound << "Plang" if (number % 5).zero?
    sound << "Plong" if (number % 7).zero?
    sound.empty? ? number.to_s : sound
  end
end

That is much better. My path to this solution was classic “Red, Green, Refactor,” and it was both satisfying and a practical demonstration of how TDD is useful.

TIL: /usr/local/bin

This is the first in a series of shorter blog posts I am going to start writing semi-regularly called TIL, in which I go into the smaller programming or operational tricks I pick up in the course of my day. Posts will be limited to 200 words. This was inspired by Hashrocket’s Today I Learned blog.

The sub-title for this one is, “Patrick States the (fairly) Obvious.”

Until today, whenever I needed to save a short script or executable, I would put it in a folder called ~/bin and then add an alias to my zshrc.local file that would call that command. For example:

alias archive_tweets="~/bin/archive_tweets"

Well, today I learned that I can just drop any old executable into /usr/local/bin and it will automagically be on my path!! This was incredibly exciting to realize. Now I can keep my scripts in ~/bin, where they are checked in to version control, and then create a symlink from each file in ~/bin into /usr/local/bin. This is incredibly empowering, and it gets rid of the messy list of aliases in zshrc.local.
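So the whole dance for a new script is one command (using the archive_tweets example from above):

% ln -s ~/bin/archive_tweets /usr/local/bin/archive_tweets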

tat

As I went into a little bit in my previous post, I have recently been taking another crack at working in tmux, after a less than stellar first attempt a couple of months ago. One of my initial problems with tmux was that it was annoying to try to keep track of which sessions I had open and in which directories. The default naming scheme for sessions is just to assign each one a number unless you manually rename it.

So I would have to go through the steps each time of 1) opening Terminal, 2) running tmux ls to see if I had any existing sessions open (and that is if I remembered to), then either 3) running tmux a -t <session name>, which normally took a couple of attempts because I can’t remember the order of the arguments or I would get the name wrong, or 4) starting a new session (which only adds to the mess) and remembering to either 5) set the name when I created the session or 6) set the name once the session attached.

Phew.

Luckily, Thoughtbot and their incredible dotfiles repo have a solution for this exact problem. Included in the bin directory is a shell script called tat. When you run tat, the script will “Attach or create tmux session named the same as current directory.” This is great. Now, if you want to attach to a tmux session, all you do is run one command, tat, and either you pick up where you left off in an existing session, or you get a new one, with the correct name and everything. Then, using the session switcher (which I bind to s), you can easily switch between all the directories you have open.

The last piece of the puzzle is that I added a small script to my zshrc.local file to make sure that whenever I open a new shell, if I am not currently attached to a tmux session, tat will get called.

# returns success when $TMUX is empty, i.e. when not already inside a tmux session
_not_inside_tmux() { [[ -z "$TMUX" ]] }

ensure_tmux_is_running() {
  if _not_inside_tmux; then
    tat
  fi
}

ensure_tmux_is_running

And that is that. Whenever I open a new shell, I get thrown into a tmux session right away. This effectively eliminates the friction I was feeling with creating and managing tmux sessions.

Vim-Tmux Runner

As the kind of person who is always looking for new ways to expand my development skills, I recently signed up for Thoughtbot’s Upcase service. Upcase is a “finishing school” for developers featuring screencasts, tutorials, exercises, and discussion forums. One of the first trails of videos I went through was a course on tmux by Chris Toomey.

I’ve experimented with tmux before, but I always thought it was a little too much for my needs; more trouble than it’s worth. But then I went through the tmux course on Upcase. One of the more interesting and useful things I got out of that course was a vim plugin called Vim-Tmux-Runner.

Vim-Tmux Runner solves a problem I had with vim while leveraging the power of tmux panes. It defines some vimscript commands that, when run from within a vim session, send the current line (or file) to an attached tmux pane for evaluation. Here are some of my most-used keybindings for Vim-Tmux-Runner.

<leader>osr :VtrOpenRunner {'orientation': 'h', 'percentage': 50}
" Opens a new runner pane vertically split from the current vim session

<leader>t :VtrSendFile
" Sends the current file to be run in the attached runner pane
" runs the file with the appropriate command, e.g. ruby <file>

<leader>va :VtrAttachToPane
" Manually attaches to an existing tmux pane

<leader>sl :VtrSendLinesToRunner
" Sends the current line to the runner for evaluation
It’s a small thing, but it makes rapid development much easier, especially when writing tests. It’s definitely one of the little efficiencies that make complicated tools like tmux and vim worth learning.