Max Krebs

Tumblr Deserved Better
>>

It’s official. Verizon now owns Yahoo, but this isn’t a post about Yahoo. For all I care, Yahoo can rot in the dumpster that is Verizon’s digital strategy with AOL. But caught up in all this is Tumblr. I have a hard time putting my sadness about Tumblr into words, but I am going to try. All I ask is some patience. This is a rare showing of sentimentality for me.

It’s easy to make fun of Tumblr, the same way that adults find it easy to make fun of anything that is popular among teenagers. But Tumblr has gotten an especially bad reputation around the tech community. It seemed to fall out of “relevance” fairly quickly. There was a time when people tried out Tumblr for semi-serious blogging or writing, but now it has the reputation of being mostly teenagers with bad senses of social justice and unhealthy obsessions with British television shows. Oh, also porn.

True or not (I’d argue not, or at least oversimplified), Tumblr deserved better. Founded in 2007 by David Karp and CTO Marco Arment, bought by Yahoo in 2013, now sold off to Verizon. Mismanaged all the way. This narrative mostly centers on Tumblr’s viability as a startup and product, but it misses how important Tumblr was to the people who used it.

How many anonymous forums do teenagers get? Where they can really be themselves around people who are like them? With no fear of being outed or found out? Not a lot. And that is what Tumblr was to many kids. How many queer kids felt lonely until they joined Tumblr and found both people like them and the words to describe themselves? I would bet it was a large number. And now those kids might lose what could be their only safe space because of Silicon Valley politics. All Tumblr kids wanted to do was be themselves and ship fictional characters.

For me, it was a place to be unapologetically nerdy in a culture and a hometown that discouraged intellectualism and geekery. It was how I got to know all of my closest and best friends. I moved halfway across the country on my own. It was reassuring and comforting to share gifs with people I had just met and know we were cut from the same cloth. I’ll always treasure that time in my life. I met a long-term partner through Tumblr. Now that I think about it, all of my major relationships started, at least in part, on or around Tumblr. More recently, it’s been a place I could experiment with my identity expression without fear.

Recently, bookmarking site Pinboard acquired bookmarking site Delicious, mostly because Delicious had been passed around by corporations (including Yahoo at one point) and mismanaged into a pale imitation of its former self (sound familiar?), and Pinboard wanted to preserve Delicious as a piece of Web history. I don’t think anyone will extend that kindness to Tumblr. If I were rich, however, I would buy it without hesitation and make sure the servers stayed up for a long time.

WWDC 2017
>>

TLDR: The tech was mostly great. Everything else, not so much.

I went into this WWDC 2017 keynote not sure where I stood on Apple and their products. I had already decided that my next computer would be a desktop gaming PC, and I was fairly sure that my next laptop would be a cheap PC running Linux. I had even considered going Android for my next phone. Before this, I was almost certain.

Now, not so much.

The Presentation

I got this idea from Aleen Simms: I kept a count of the diversity statistics on stage at the WWDC keynote. Of the people on stage with speaking roles, there were 10 white men, 2 white women, and 1 woman of color (who was also pregnant). The last couple of Apple keynotes seemed to be moving in generally the right direction with their diversity statistics. The Apple Music presentation featuring Bozoma Saint John was a particular highlight. But this year was a step back. In 2017, being the biggest company in the world and having double-digit white guys on stage is unacceptable.

The few women who were on stage were all fantastic. One woman from Epic Games demoed VR development on a Mac using the HTC Vive, and she pulled off 1) a live demo while 2) wearing a VR headset with 3) no script or teleprompter that 4) required her to physically move around a set. That was seriously impressive. One of the presenters was also very pregnant. I don’t ever want to have children, for a reason: I can’t imagine that was easy. But it was a great image that pregnant women are just as capable as anyone else. I just hope Apple treats its pregnant employees with respect behind the scenes.

Sigh. Okay, it seems that every Apple keynote these days has one part or another that is controversial. Last year it was Phil Schiller describing Apple removing the headphone jack on the iPhone 7 as “courageous.” For me, that moment this year was a transphobic joke made by the (generally genuinely funny) Craig Federighi. While talking about iOS 11’s new drag and drop feature, Craig said, “It’s a drag fest.” Cue laughter. I can’t tell if this was scripted or something he ad-libbed, and I don’t know which would be worse. I don’t think I need to explain why this hurt me as much as it did. I saw a couple of other trans people on Twitter describe it as having all your enthusiasm run headfirst into a brick wall. Can confirm, that was pretty much the feeling. Infuriatingly, this blatantly offensive “joke” will get none of the Apple press outrage that the courage line did. I wonder why.

This leads me to my final thought on the actual presentation. I am so tired of white, straight, middle-aged dude sensibilities in tech. Dad jokes. Stupid video skits. Etc. It’s all so boring and dull and I am sick of it. It’s also how a transphobic joke can go basically unnoticed in the community. It’s so exhausting.

Mac Software

  • Mail / Photos: Not much here. ¯\_(ツ)_/¯
  • Safari: The privacy features for stopping autoplay video and fighting cross-site tracking are fantastic. I use Chrome and some plugins to do the same thing, but making it the default in Safari will bring at least some level of protection to a much larger audience. Always a win. Also, I’ve heard that Safari, when it’s updated, will support WebAssembly. Very nice.
  • Metal 2: I am a little disappointed that Apple is signaling that they won’t support Vulkan or the other graphics standards that literally everyone else is using, but if they can get engine makers and developers on board (neither easy nor guaranteed) then it will come out in the wash anyway. I was also genuinely surprised there was so much emphasis on VR. I am skeptical of both VR in general and VR on the Mac, but time will tell.
  • APFS: ding.

Mac Hardware

  • iMac Pro: Wow. I know most of the impressiveness of this product is the large numbers that don’t really mean too much, but those large numbers are pretty damn impressive. And I am a sucker for Space Grey anything. I only wish I had a job that justified needing to use it.
  • MacBook updates: 👍
  • External GPUs: 👍
  • AMD GPUs: 👎
  • I want all the computers.

iOS

  • AR Kit: Cool. I’ve got some ideas on things I could do with this.
  • Too Much Machine Learning: Seriously, cool it with the buzzwords.
  • Nice for people who use iPads (not me)
  • I am not a fan of the new design language.

Miscellaneous

  • Pride Watch Band! I know corporate expressions of queer pride are problematic, but I really want a pride flag watch band.
  • HomePod: bad name. Not working with Spotify is a deal breaker.
  • I wonder how much of this will work reliably
  • That is not how you pronounce Reykjavik, Craig.
  • I had no interest in anything in the watchOS section. I don’t need useless watch faces and notifications to guilt me into working out more.

Wordpress
>>

This is probably not an original thought but Wordpress is awful.

I think it’s a relatively well-designed CMS that is really good at letting people make websites without too much thought or programming knowledge. But I have a number of major problems with Wordpress.

Security

The plugin and theme engine in WP allows arbitrary third-party code to be executed on your website (and potentially on your server). This is a giant, glaring hole in the security model, one that Automattic has only sort of taken steps to counteract. For example, using Wordpress.com’s hosting for your site 1) is more secure and 2) saves you from managing your own server, but then your site and content are trapped and held hostage by Wordpress.

Storing Configuration in the Database

Most websites and web apps these days are two things: a bunch of files (code) and a database. This generally makes collaboration between developers easy, because the code files can be checked into a version control system like git and Github and easily shared. The database is more complicated.

Take a Ruby on Rails application, for example. Two developers are working together and all the code is checked into Github. The way Rails is architected ensures that the data in the database is arbitrary and volatile. It doesn’t matter what is in the database, really; as long as the schema is consistent, it doesn’t matter if a user has one email or another. Rails solves this by including scripts to take a saved snapshot of the database structure and recreate it, plus seed files that can load data.
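In practice, that workflow looks something like this (a sketch; the exact commands vary by Rails version):

```shell
# Both of these read from files that live in version control,
# so any developer can rebuild an identical local database.
% rails db:schema:load   # recreate the table structure from db/schema.rb
% rails db:seed          # load starter data from db/seeds.rb
```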

Wordpress is different. Wordpress stores site-wide settings in the database. Flip an option in the settings view or update a page, and that change is stored in the database. Two developers can’t easily collaborate on a WP site, because the configuration changes live in the database; passing the schema back and forth between developers would miss crucial configuration changes. Infuriating.

Single Table content structure

All of the actual content on a site is stored in one database table. This is idiotic to anyone who has taken a 101 course on database design.
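To make that concrete: in a stock Wordpress database, blog posts, pages, revisions, media attachments, and even nav menu items all share the wp_posts table, distinguished only by a post_type column. A query like this shows it:

```sql
-- Everything is a "post": count the rows in wp_posts by type.
-- Posts, pages, revisions, attachments, and nav menu items
-- all live in this one table.
SELECT post_type, COUNT(*) AS total
FROM wp_posts
GROUP BY post_type;
```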

Other People

The people who develop Wordpress themes and plugins are committed to twisting Wordpress to do whatever they want, with no regard for where WP is inherently weak or strong. This leads to tedious, unmaintainable Frankenstein’s monsters of websites.

This was mostly prompted by having to work on way too many Wordpress sites at my day job. </rant>

SCOTT HANSELMAN: Ruby on Rails on Azure App Service (Web Sites) with Linux (and Ubuntu on Windows 10)
>>

This is a great post from Scott. I’ve only recently become familiar with his writing and excellent podcast Hanselminutes but he does great work.

The Linux subsystem for Windows was the start of my process of actually looking at Windows as a suitable operating system for software development, and if getting a Rails environment set up on Windows is as easy as Scott makes it look, then I am sold.

I am not particularly attached to any of the OS-level features of macOS, especially since rediscovering how good VS Code is. My recent push towards moving all my data to cloud services has made me more platform agnostic than ever (why hello, Samsung Galaxy S8), but what I would really miss if I moved to Windows is the command line. Unix, and by extension Linux, makes software development so easy that I was considering a dual setup of Windows for gaming and a flavor of Linux for development.

Anyway, that is all probably best saved for another post. My point is, if you are a web developer who is annoyed at the slow progression of the Mac operating system and hardware, take a look at Scott’s post.

Is the iPad a Professional Machine?
>>

This is sort of a follow-up to a post I wrote last month. I had this thought at like five in the morning as my cat was waking me up for breakfast. There has been a lot of strife in the Apple tech community about iPads and Macs and what “professional work” is and what the future of computing looks like. My opinion on this probably means very little. I’ve been an Apple fan for years, but I’ve been planning on moving to Windows/Linux and Android recently. So take all this with a grain of salt.

As I mentioned in my “OS is the new Linux” post, there are plenty of people who run their professional lives on an iOS machine, mainly the iPad Pro. However, just because Apple markets the iPad as a “Pro” machine, and just because some people can do their professional work on an iPad, doesn’t mean that iPads are Professional computers, or even general purpose computers. For example, you cannot write and publish iOS apps (or web apps for that matter) on iOS. You must own a Mac if you want to make apps for Apple’s platforms. So by design, there are things a Mac can do that are not only difficult, but impossible on an iPad.

Is Apple motivated to change this? I don’t think so.

Let’s ignore the fact that right now, there are hundreds of engineers within Apple who need tools to do their jobs that will probably never be available on iOS. I doubt the team designing Swift would be able to do their job without a command line. For the sake of argument, let’s ignore internal Apple needs.

Say that, magically, overnight you could do software development on an iPad. How many programmers would switch to using one? Some probably would, especially if iPads are the future of computing. Say that iPads truly become general purpose computers and Apple stops making Macs altogether; that means they just lost the product line with the highest average selling price by far. Maybe, maybe, Steve Jobs would have done this, cannibalized Apple’s highest-ASP product for the sake of progress, but Tim Cook absolutely will not. Maybe I am putting too much stock in how much Apple cares about ASP, but even leaving that aside, Tim Cook has shown that he is willing to keep products around just because someone will buy them. Hell, look at the Mac Pro.

If you remove every other factor that differentiates iPads and Macs and just look at iOS app development, it seems reasonable to me that Apple would want to keep app development as something you can only do on the Mac, if for no other reason than to continue to require developers to buy an extremely expensive Apple product. Maybe it’s in Apple’s best interest to keep the iPad a consumption-only device.

Oh Bootstrap, I'll Never Leave You Again
>>

I am not a designer. I don’t like designing front-ends for websites. It’s tedious and I am not good at it, so I mostly just spend hours moving CSS values up and down in Chrome Dev Tools until it looks right to me. Oh wait, I also have to go through the same process again for each responsive breakpoint. No thanks.

Which is ironic, because I started at my job-y job as a front-end intern. But it’s not for me. I can write SQL queries happily all day, but please don’t ask me to touch your CSS. It will look bad and we will both be unhappy.

All this is to say that this is why I stick with front-end frameworks: the less I have to design, the better. My framework of choice is Bootstrap. It’s quick to set up, has some sensible defaults, and looks pretty good. Unfortunately, the popularity of Bootstrap has contributed to the “Every Website Looks the Same” syndrome that the Internet has been experiencing over the last several years. Because of that, and because I wanted to really simplify my reliance on dependencies, I’ve tried to move away from Bootstrap where I can.

With this site, that meant no front-end framework at all. I stole some CSS from a couple of other blogs I enjoy reading and tweaked it to fit what I was going for, which was simple and fairly information dense. The result was this site being unusable on a mobile device (see also: I am not a designer).

For my next big project, What Do They Look Like?, I wanted to try out Thoughtbot’s Suspenders gem to generate the template for the Rails app. This turned out to be a great idea in the short term and terrible in the long term, for a couple of reasons. One is trying to self-host a web app that is specifically configured for Heroku; another is that Suspenders generates your app bundled with the Bourbon front-end framework. I thought this would be fine. I wanted to try out Bourbon anyway as a possible framework for sadrobot.software.

The problem was that, unlike Bootstrap, which pre-defines all the styling for you in HTML classes, Bourbon is a mix-in library for CSS and SCSS. I thought I would be able to load Bourbon, add some classes and ids to my HTML, and be golden, but I had to both define the SCSS styling and structure the HTML. So I pulled some example CSS from the Bourbon website and got the bare bones working. It looked good.

The main feature of What Do They Look Like? is that the directory is basically a grid of faces with a name below each image; easy to navigate, search, or just look at. Trying to get a grid of faces and names working in Bourbon without spacing issues was…a problem. If an image was too large or a name was too long, there would be clipping. That was not okay with me. Forcing my images to be square was no problem, but I could not compromise on people’s names being cut off, so I got frustrated, ripped the Bourbon grid out, and replaced it with Bootstrap.

I hate feeling like I am hacking and breaking CSS, and I hate feeling like I am using !important too much. I can write well-designed Ruby code, but I find it impossible to write well-designed CSS. With yarn and a little bit of fussing with import statements, I had my grid working again. It was responsive, with big images and clear text. It didn’t clip anyone’s name off. And I didn’t have to hack CSS to do it.
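For the curious, the markup ends up being roughly this kind of thing (a hypothetical sketch using Bootstrap 4’s grid classes; the real class names and fields in my app differ):

```html
<div class="container">
  <div class="row">
    <!-- One cell per person: 2 across on phones, 4 across on desktops. -->
    <div class="col-6 col-md-3 text-center">
      <img src="face.jpg" alt="Jane Doe" class="img-fluid">
      <p>Jane Doe</p>
    </div>
    <!-- ...one col-6 col-md-3 cell per person... -->
  </div>
</div>
```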

I love you Bootstrap. I will never wander again.

Deploying Rails to a Linux VPS
>>

This is part three in my series on hosting a Rails application on a Linux Virtual Private Server. You can read part one here and part two here.

It’s About Time

Finally we are going to talk about what you actually do once you have a fancy new Linux server up and running. We are going to put our website on the Internet!

The stack that I use is Nginx as the web server proxying to Puma as the application server, Postgres for the database, and rbenv to manage the Ruby version both on my local machine and on the server (RVM people, don’t email me). Finally, we are going to configure our Rails app to use Capistrano for deployment.

Installing Your Web Server

Every guide I’ve ever read has updating your package manager as the first step of doing anything, so in that great tradition:

% sudo apt update && sudo apt upgrade

First things first, we will need to install Nginx and some other dependencies for later. You might have already installed git, but if not, make sure you do now.

% sudo apt install curl git-core nginx -y
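For a preview of where this is headed, the end goal is an Nginx server block that proxies requests to Puma over a Unix socket, roughly like this (a sketch only; all paths and names here are hypothetical and depend on where your app ends up deployed):

```nginx
# Hypothetical sketch: /etc/nginx/sites-available/myapp
upstream puma {
  # Puma will be configured to bind to this Unix socket.
  server unix:/home/deploy/myapp/shared/tmp/sockets/puma.sock;
}

server {
  listen 80;
  server_name example.com;
  root /home/deploy/myapp/current/public;

  # Serve static files directly; hand everything else to Puma.
  try_files $uri/index.html $uri @puma;
  location @puma {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://puma;
  }
}
```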

Installing Your Database

Next we are going to install Postgres (I refuse to call it PostgreSQL). We also need a user that our application will use to connect to the database when it runs. Conventionally, that user has the same name as the application. I like to create my database at this point too, just because I am going to be in the psql interface anyway and I would probably forget if I didn’t get it over with.

% sudo apt-get install postgresql make postgresql-contrib libpq-dev -y

Along with Postgres, we are installing make, the GNU build tool, and libpq, the C library for interfacing with Postgres (you will not have to worry about this).

Like I said before, we also need to create a user (or role) in the postgres database for our application. This command is a little hairy.

% sudo -u postgres createuser -s user_name

sudo -u postgres means “run the following command as the postgres user.” postgres is the default superuser account for the database, so we use it to run commands against the database until we have our own application user. The -s flag creates the new user as a superuser. Replace user_name with the name of your application, or whatever user name you want to use to connect your app to your database.

This is all boring database admin, but hang in there, we will get to the fun stuff in a second. Next, we have to log in to the database interface to set the password for our newly created database user.

Much like our previous command, we need to use the postgres user to access the psql interface to the PostgreSQL database (confusing).

% sudo -u postgres psql

You will never guess what the command for resetting a user’s password is.

postgres=# \password user_name

Pick a strong password, and make sure you store it somewhere secure like a password manager, because you will need it at least one more time.

While you are logged in, create your production database. The convention here is appname_environment.

postgres=# CREATE DATABASE appname_production;
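Later on, the Rails app will connect using the user and database we just created. That wiring lives in config/database.yml, which ends up looking something like this (the names are placeholders, and you should load the password from an environment variable rather than hardcoding it):

```yaml
# config/database.yml — production section (placeholder values)
production:
  adapter: postgresql
  database: appname_production
  username: user_name
  password: <%= ENV['APPNAME_DATABASE_PASSWORD'] %>
  host: localhost
  pool: 5
```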

Then you can exit psql. This is not super obvious at first because while most command line interfaces use some form of exit to quit, psql uses

postgres=# \q

Yay databases! If you have an existing database that you have dumped and would like to load the data from, you can run this command:

% pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump

But if you are setting up an app from scratch or deploying for the first time, no need to worry about that just yet.

Installing Ruby <3

Finally! The relatively fun stuff. Before we can get our application up on the server, we need the Ruby programming language and a way to manage which version of Ruby is installed. If you are using an environment manager for Ruby on your development machine (which you should be), you can use the same one on your server. Like I said before, I use rbenv, so if you use RVM, the following commands will be different and you will have to sub in RVM in future commands and configuration.

There are some more packages that need to be installed first. I won’t go over what each of them does, but they are all needed down the line.

 % sudo apt-get install make build-essential nodejs libssl-dev zlib1g-dev libreadline-dev libyaml-dev -y

Luckily for us, there is an rbenv installation script we can run to get up and running fairly quickly.

% curl -fsSL https://github.com/rbenv/rbenv-installer/raw/master/bin/rbenv-installer | bash

That will probably end by giving you a message about how you need to put rbenv in your PATH variable. Down the line, it’s also going to ask you to add some more settings to your shell configuration. To save time, just add them all now. Run the following commands to add the PATH config to the rc file for whichever shell you are using (the default is bash, so use .bashrc, but I use zsh).

% echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.zshrc
% rbenv init
% echo 'eval "$(rbenv init -)"' >> ~/.zshrc
% source .zshrc

Along with that installer script for rbenv, there is also an rbenv doctor script that runs checks to make sure everything is set up correctly. We can run it with

% curl -fsSL https://github.com/rbenv/rbenv-installer/raw/master/bin/rbenv-doctor | bash

If anything is missing, that script will tell you the command you need to run. It will say there are no versions of Ruby installed, so let’s take care of that.

 % rbenv install 2.3.1
 % rbenv rehash
 % rbenv global 2.3.1
 % ruby -v

The first command will install Ruby version 2.3.1 (replace this with whichever version is most recent) and the next two will tell rbenv to use the version of Ruby you just installed as the global Ruby version. If all goes according to plan, the output from ruby -v should show 2.3.1.

Install Rails and Bundler

Rails and Bundler are the two gems you will need to get a basic app up and running, so install them now.

% gem install rails -V --no-ri --no-rdoc
% gem install bundler -V --no-ri --no-rdoc

The -V flag gives verbose output; --no-ri and --no-rdoc prevent any documentation from being installed so that the gems take up as little space on disk as possible (on newer versions of RubyGems, these two flags are replaced by --no-document).

Okay! I think that is everything for getting the server prepped for our Rails application. Next we are going to go through the configuration steps for getting a Rails app deployed using Capistrano.

A Sidebar to Gloat
>>

On February 28th, 2017, in a perfect representation of how this year is going, Amazon’s AWS S3 service fell over. Amazon described it as “Increased Error Rates,” which, to me, seemed to be understating the situation a bit.

S3 is Amazon’s cloud storage service, which much of the Internet uses to serve files and assets like images or PDFs. Some issue caused S3’s US-East region to be unable to accept or serve connections. I would have thought this meant maybe some images on some sites would be broken, but this outage illustrated the danger of putting all your technology eggs in one basket. One outage in one isolated service is an inconvenience, but the cascade as inextricably linked systems all fail is a major vulnerability in our web infrastructure. This reminded me very much of the mass DNS outage caused by the DDoS attack on Dyn, a single DNS provider.

As you know if you’ve been reading my series about migrating my hosting from Heroku to Linode, just the week before the S3 outage I moved this site, and (just the day before the outage) my podcast discovery site, onto a Linux VPS. This comes as a giant relief to me because, despite the fact that this shouldn’t be the case, Heroku apps are down because of the S3 problems. My question is: why are a hosting provider’s app containers going down because a file storage service is down?

The bottom line is: if I hadn’t moved my hosting infrastructure from Heroku to a Linux VPS on Linode, all of my websites would be down right now and there would be nothing I could do about it.

I can’t gloat too much. Both sites use S3 to serve images; I just happened to pick the Oregon region, which isn’t experiencing any outages. But now I will be looking for a better solution for hosting images that isn’t so prone to taking down the entire web.

Setting up a VPS on Linode
>>

This is Part Two in my series of posts on moving from Heroku to Linode for web application hosting. You can read part one here.

Front Matter

The most intimidating part of setting up a self-managed hosting solution, for me, was figuring out exactly how to configure my server at an OS level. I am not a sys-admin (or at least I am not one full-time), so it was scary thinking about managing a machine entirely from the command line. I had some experience playing around with Ubuntu servers for fun and for assignments in university, and I had fairly successfully set up hosting for a client’s Wordpress site on Linode before.

I am going to be upfront here: Linode’s documentation is inconsistent in quality. The basic “Getting Started” and “Securing Your Server” guides are fine; they are easy to follow and descriptive. But the deeper you dig into guides for specific implementations, the more sharply the quality declines. My experience is specifically with guides for setting up a Rails application, so your mileage may vary; the guides for setting up a basic LAMP stack, for example, I find to be better than the Rails ones. Because of this, after a certain point I had to look elsewhere for meaningful instruction.

Ironically, I found that DigitalOcean, Linode’s main competition, has better generalized documentation for setting up servers and stacks. Because of this, I put together my own checklists based on a combination of Linode’s and DigitalOcean’s documentation. Those checklists are what I am basing these posts on, but I will link out to the relevant documentation as I go along in case you want more detail and/or want to fact check me.

Creating Your Server

I am going to assume that you have already gone through all the steps required to actually procure a Virtual Private Server on whatever hosting provider you prefer and have provisioned it with whatever flavor of Linux you are most comfortable with. I will be using Ubuntu 16.04 on Linode.

Getting Started

Once your server is created and the image is built, find your server’s IP address (in the Linode control panel it is under “Remote Access”) and ssh in using the root password you created when deploying your server.

$ ssh root@ipaddr

The first thing you should do is make sure all your software is up to date by running:

% apt update && apt upgrade

Quick sidebar: if you are on an older version of Ubuntu, you will have to substitute apt-get for apt, since the apt command is not available. Also, if you are using a different distribution, mentally replace apt with your package manager of choice.

Anyway, moving on, next you want to set the hostname for this machine. There are also two different ways to do this based on what version of Ubuntu you are running.

# < version 15
% echo "hostname" > /etc/hostname
% hostname -F /etc/hostname
# > 15
% hostnamectl set-hostname hostname

I am going to take one sidebar here and install zsh and vim, just to make my life easier.

% apt install vim zsh -y
% zsh

This is just a personal preference thing. Ubuntu ships with vi, but I am used to vim, and zsh is my shell of choice.

Now we want to add the hostname that we set before to the hosts file. Using your editor of choice (e.g. vim, nano, vi, emacs, etc.), add this line to /etc/hosts.

ipaddr hostname.tld hostname

Replace ipaddr with your server’s IP address; hostname is the hostname you just set, and hostname.tld is the domain name you plan to use for your server. If you don’t know what it will be yet, just set it to hostname.com; you can always change it later.

The last step in the general setup is setting the timezone of your server.

% dpkg-reconfigure tzdata

Securing Your Server

This is an optional step, and I am not really sure how I feel about it myself, but you can configure Ubuntu to automatically update your packages by using unattended-upgrades.

% apt install unattended-upgrades -y

This will create a configuration file at /etc/apt/apt.conf.d/50unattended-upgrades. There are a lot of configuration options here, so I won’t go into detail about them, but you can find more info here.
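The gist of that file is a list of package origins that are allowed to upgrade automatically. An excerpt (the defaults may differ slightly by Ubuntu version):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
//  "${distro_id}:${distro_codename}-updates";
};
```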

Something you definitely want to do is create a non-root user. Everything we’ve done up until now has been as the root user, but for security purposes, we don’t want our web app, or any of the software we install in the next post, to run as root. For simplicity, the user we create will also be used for deploying our Rails app with Capistrano. We also want them in the sudo group, so they can run commands as the superuser without compromising the security model.

% adduser user_name
% adduser user_name sudo

You will be prompted to set the Unix password for that user; now you can log in over ssh with ssh user_name@ipaddr and the password you just set.

For this next step, you will want to run the next two commands from your local machine, so log out of your server using exit. I am also going to assume you are using a Mac (because that seems to be the most common machine in Rails development). We are going to create an SSH key and upload it to your new server, because it is more secure than logging in with a password, and because I get sick of constantly typing passwords. Because they are better people than I am, Github has much better documentation on checking for existing SSH keys and generating them, and they even have guides for Windows.

For our purposes, we are going to use a package called ssh-copy-id to upload our SSH public key. I believe this comes packaged with some distros of Linux, but on macOS you can install it using Homebrew.

$ brew update
$ brew install ssh-copy-id

If you know you already have generated an SSH key on your local machine, you can go ahead and skip this next step, as it will potentially overwrite existing keys.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
$ eval "$(ssh-agent -s)"
$ ssh-add -K ~/.ssh/id_rsa

This will walk you through the steps of generating an SSH key. It should be pretty straightforward: go with the defaults and set a strong passphrase.

Whether you had to create one just now, or you already had an existing SSH key, upload it to your Linode server using ssh-copy-id.

$ ssh-copy-id user_name@ipaddr

One more optional step that I like to take for convenience’s sake is to add your server to the SSH config file on your local machine. This will let you log in without typing out the entire ssh command every time. To do this, open ~/.ssh/config with your favorite text editor (you may have to create the file first) and add an entry formatted like this:

Host hostname
  HostName ipaddr
  User user_name

For example, the config entry for this website on my local machine looks like this.

Host sadrobot
  HostName 8.8.8.8
  User deploy

Go ahead and log in to your server with your fancy new non-root user. If you did both of the optional steps, you should be able to just run ssh hostname and log in successfully.

A couple of last security settings to configure. I always disable root login over SSH to force myself to use my non-root user. This can be done by setting PermitRootLogin no in /etc/ssh/sshd_config. That is a long file, so to find the relevant section, search for # Authentication. For extra security, you can also prevent anyone from SSH’ing into your server with a password by setting PasswordAuthentication no in the same file. The downside is that you can then only log in from a machine that has your SSH key on it, so if you switch machines or overwrite that SSH key, you are SOL.
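Put together, the relevant lines in /etc/ssh/sshd_config end up looking like this (the second line is only for the optional password lockout):

```
PermitRootLogin no
PasswordAuthentication no
```

Before restarting the service, you can check the file for syntax errors with sudo sshd -t, which prints nothing and exits cleanly if the config is valid.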

Regardless, restart the SSH service to apply the changes to the config file.

% sudo service ssh restart

Fussy Customizations

These last couple of items are just customizations that I do to make life on the server a little more bearable and more like my development machine. Feel free to steal them or adapt them to suit your preferences.

Switch the default shell to zsh. You may need to install zsh first with sudo apt install zsh.

% chsh -s $(command -v zsh)

Set the zsh prompt to be more useful.

% echo "autoload -Uz promptinit" > .zshrc
% echo "promptinit" >> .zshrc
% echo "prompt redhat" >> .zshrc

Add a sanity alias for ls, because I hate the default ls behavior.

% echo "alias ls='ls -alph'" >> .zshrc

Alias vi to vim so I don’t accidentally open vi when I meant to open vim.

% echo "alias vi='vim'" >> .zshrc

Reload zsh config.

% source .zshrc

Download and use my vimrc configuration. You might have to install git first for this one, and if you haven’t set up a GitHub SSH key on the server, clone over HTTPS instead of the git@ URL.

% git clone git@github.com:P-Krebs/vimrc.git ~/.vim_runtime
% sh ~/.vim_runtime/install_awesome_vimrc.sh

What’s next?

Phew. I am sure that was a lot. Hopefully that wasn’t too intimidating. I think this should leave you with some sensible defaults for hosting and basic security on your server. Of course, I could be way off base here, but if I am doing something wrong, I am sure the internet will tell me.

In the next part, we are going to actually do the interesting stuff: deploying your Rails app to your newly created server! Woo!

Moving from Heroku to Linode
>>

part one

Heroku is a pretty great service. It takes all of the complexity of web infrastructure and reduces it down to a pretty simple interface. Deploying your app is as simple as pushing to a git repository, and you don’t have to worry about provisioning users or databases or anything to do with the actual server. I am a firm believer that Heroku is a great choice for many people and that everyone from beginners to large companies can, and should, use Heroku for their hosting. For me, though, I think I am ready to break up with Heroku.

It’s not you, it’s me (barf)

Nothing bad happened between me and Heroku. It would almost be easier if there was some catastrophic failure that pushed me from the platform. But we just want different things, Heroku and I. And I am not a fan of the people they have been hanging out with.

In 2010, Salesforce bought Heroku for $212 million in cash, and the more I get to know Salesforce, the more I don’t want their hands in my soup. And while it’s probably fine now, I don’t want to be forced to move and then scramble to figure out how to migrate my entire infrastructure. I want to get out of this relationship before it turns sour.

Also, this gives me a chance to expand my skill set, with more independence as a developer and less reliance on closed systems.

The Plan

So as of right now, I am going to be hosting all greenfield projects on Not-Heroku. On a new project at my Job-y Job, we couldn’t use Heroku since we needed a static IP, and our client barely wanted to pay us for the work, let alone pay for a Heroku Enterprise account (thanks for that one, Salesforce), so I took the chance to go through the process of setting up a production Rails environment on a VPS. I went through the entire process four times all told. The first two were total screw-ups that ended with me deleting the entire server and starting over. The third time was the charm, and the last run-through was to get the process really squared away and to document it, which I will now pass on to you.

I think there are going to be roughly three parts to this series. This is Part One, the context and introduction. Part Two will be setting up the Virtual Private Server and all the standard prep work. Part Three will be setting up and deploying Rails. My hope is that this will be the comprehensive series of articles that I would have wanted three weeks ago when I was figuring all this out. Hopefully it will be of some use. And if all goes according to plan, by the time this series is finished, this blog won’t be hosted on Heroku anymore.

What’s next?