This is the first part in a multi-part series about converting a 2014+ (Rushmore and M8) Street Glide to a Road Glide. I could not find anyone online who had posted a how-to or any examples of doing this before, which surprised me, so I set out on a very expensive science experiment and succeeded.

First off, many people immediately discount this and wonder why. The reason is simple, and it is the reason behind lots of things: money.

I hated the low-speed handling of my Ultra and started looking at swapping it for a Road Glide of similar year. The problem I ran into was greed at both independent and Harley dealerships in Dallas-Fort Worth. Long story short, it would have cost me a minimum of $8k to swap to a Road Glide of the same year with more miles; that was not even for the Ultra, but for the standard or Special, which are worth less than an Ultra. After being told that this was how it was and they would not be reasonable on the price, I made a wager with the manager of a local dealership that I could convert mine for under $3k, which I did. Unfortunately, the manager was fired during this endeavor, so I can't collect. Rumor on Facebook has it that he was skimming cash or something along those lines.

The entire project cost me about $1500 before painting the fairing (I have a two-tone paint job only offered on the Ultras during one year). This was partially because I found someone who had tried to do the same thing and given up, and he sold me the bulk of what was needed for a very good price. The rest I sourced from salvage bike sellers on eBay and Facebook Marketplace, and for some of the harder-to-find items I had to go to the dealership.

In this series of write-ups I will be posting a parts list, a teardown, and a rebuild with the Road Glide fairing, all with complete pictures and commentary showing the gotchas and the mistakes I made on each part.

I will update this with a link to each page once I post it. As of 1/20/2018 the bike is completely converted and I am just working on these pages explaining each step of the way.

I am writing this as much to share an experience as to give a brutal reminder to others of why it is best to be prepared.

A few weeks ago, on the way back from a nice weekend of riding in the Texas Hill Country, one of the bikes in our group went down (wrecked). I have never pulled a faster U-turn on a bike than I did to rush to the aid of the couple involved. Neither was wearing a helmet, and coming up on the crash was like the aftermath of a chase scene in an action movie: parts of a bike everywhere, then trails of blood leading up to a couple lying in the road, severely hurt.

Backing up a bit, about 3 years ago a good friend of mine was working for an outdoors activity supply company. During this time she had quite an obsession with safety and being prepared for the worst, which was about the same time I started riding a motorcycle again after a nasty wreck I had been involved in. She hounded me for a few months about carrying first aid supplies and these things she called "trauma packs," which help blood clotting happen faster, in case something happened. After quite a bit of this I finally broke down and bought a set of two of them along with an outdoors activity first aid kit, which I carry in my saddlebags to this day.

Fast forwarding back to a few weeks ago, the moment I had feared, and hoped would never come, had occurred: it was necessary to use these. As I pulled to a stop I screamed at my passenger to open the right saddlebag and grab the items in there. She grabbed my first aid kit and the trauma packs and rushed up to the scene as I was putting the kickstand down and dismounting the bike. Thankfully an ER nurse witnessed the crash and stopped to render aid, and she was able to use the trauma packs better than any of us ever could have.

Both involved were airlifted to the nearest hospital trauma unit and will recover fully in time according to the doctors.

The aftermath showed all of us present one thing: the trauma packs and the first aid kit were used to stem a severe head wound on one of the two involved and might have saved her life. It was a very traumatic experience for all of us, and it is one that has me beyond thankful to my friend who, years before, hounded me enough to become prepared for such a situation. Had she not, a young woman very well might not have made it to a hospital alive years later.

With that said, I urge all of my fellow riders to be prepared for such an incident, as you never know when one might occur; unfortunately and sadly, they can happen in the blink of an eye, where seconds count. I know we hear this all the time from people that do not ride, but this is coming from one of your own, as a warning and as a favor to ask, in case one day it is myself or one of your loved ones needing it.

In hindsight, I would say a more "trauma oriented" first aid kit would have been a better choice than the outdoors one I had chosen, but the trauma packs were the star of the day and should be standard issue for every riding group; they really do work. Another note: know how to use them, and do not wait for an accident to read the instructions. They are easy to use, but require knowledge to use at a moment's notice, and had a nurse not been present, using them would have taken much longer.

Links to my recommendations on Amazon:
Trauma Pack
First Aid Kit


On Tuesday Congress gave privacy a huge middle-fingered salute by revoking the FCC's rules preventing ISPs from selling your browsing history to 3rd parties. Many people, mostly in the conservative camp, applaud this as another source of income for these "overly regulated" ISPs. As a pretty libertarian-thinking person I am usually against regulation by the government, but I am becoming more and more concerned about the destruction of regulations that were put in place to prevent wholesale abuse by entities and corporations. This rule was one of those; it prevented ISPs from sharing raw info on what you were looking at with advertisers or whoever would like to purchase it.

To avoid the typical political debate here, I am going to provide a solution that might (key word there) help you avoid the effects of this rule being removed.

Potential Solution for Consumers

The might-work solution I am giving out is to use a VPN for your internet connection.

Please note there will be some pretty big gotchas on this which I will discuss later.


When discussing this in person many people look at me like, "yeah dude, you are talking technogeek, but I already use a VPN to work from home, so that won't help." So allow me to explain what exactly this is like a normal person instead of the technophile I am.

A Virtual Private Network (or VPN for short) is essentially what the name says: a virtual network (as in not a physical one, such as the wifi or wired one in your house) that is private. Anybody that can read and think critically could probably figure that much out, so what does it really mean?

VPNs in this case are essentially an encrypted tunnel from your device (computer, tablet, phone, router, etc.) to another server, which then acts as your exit to the wider internet. Think of your data as water in a garden hose: it enters at the hose bib (yep, that's what those things are called) on the side of your house, flows along the hose, and comes out at its destination like your garden or lawn. From the outside you can see the water leaving the end of the hose, but you have no view of it anywhere along the way.

Using a VPN does the same thing: your ISP just sees the VPN tunnel (the hose in my analogy above) in its network, and your data (the water) appears to the outside world as coming from the tunnel's endpoint instead of your device. So in theory the ISP sees only a VPN tunnel passing through its network, and everybody out in the wider world sees you as connecting from the endpoint of the VPN tunnel.


There will be a few gotchas here, and some are biggies.

Gotcha 1

Netflix will not work.

WAIT!!! WHAT?!?!?!

Yes, let that sink in for a few. Netflix blocks VPN connections by default. If I were to venture a guess, I would say this is part of the contracts they have with studios, so that their content cannot be viewed outside of certain countries. I would also venture that it is a money grab to force you to buy more DVDs (yeah, I almost forgot what those were too).

There are ways around this (i.e. google "netflix VPN bypass"), but I won't go into them here.

Gotcha 2

ISPs adapting.

This is a very real possibility, since ISP lobbying is arguably the reason Congress passed this bill. They could very well start introducing terms in their usage agreements that prevent the use of a VPN, to maximize their revenue through this potential law.

You might say: well, I just googled it and saw some VPNs claim they are "stealthy" and "undetectable." Well, move on to gotcha number 3.

Gotcha 3

Three words: Deep Packet Inspection.

This was all the rage a few years ago when it was being used on corporate networks. It is essentially a technology that allows an ISP to detect that you are using a VPN. A lot of vendors of this technology don't go into detail about what their products can do, probably because they do more than they want the public to know about. I would venture to guess that some of the higher-end solutions can tap into a full backbone connection and detect these "stealthy" VPN connections in real time.

Note that I am just a software engineer and not a network or hardware guy so this is just an educated guess based on what I see done in technologies today.

Gotcha 4

This VPN needs to be active on each device you own or through the common pipe to the internet (usually a router).

Most homes have computers, phones, tablets, set top boxes, etc. in them all connecting to the internet. In order to make everything secure you would need to run a VPN on each device which may not be possible since some of these devices do not allow using VPNs.

The alternative is to have your router connect to the VPN and keep the connection active all the time. Again, many routers will not do this, and if you use the modem/router combo given to you by your ISP, chances are almost certain that you won't be able to.

I recommend a 3rd party router running some form of 3rd party firmware that supports this. A great example would be any router that supports dd-wrt. You can find more about this project on their page here.
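For illustration only, here is roughly what an OpenVPN client configuration looks like on firmware like dd-wrt; the server address, port, and certificate filename are placeholders your VPN provider would supply, not working values:

```text
# Minimal OpenVPN client config sketch; vpn.example.com, 1194, and ca.crt
# are placeholders. Your provider's setup guide has the real values.
client
dev tun
proto udp
remote vpn.example.com 1194
auth-user-pass
ca ca.crt
persist-key
persist-tun
nobind
verb 3
```

With this loaded on the router, every device in the house rides through the tunnel without any per-device setup.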

Gotcha 5

This one follows from the profiting-from-this-law side of things. Since there are no longer any barriers to ISPs sharing your browsing history, what if a VPN provider decided to do the same thing in place of the ISP for some extra cash?

Well, the wording of the law allows this and has no language to block it. So legally they could do it, and I am not much of a gambling man, but I will bet that some will. I would also bet that, unless another law prevents it, this will bring a new influx of VPN providers that promise they will not sell your information.

Services Available

OK, now that I have presented all the theory and gotchas, you might be curious how to get this service and set it up.

I personally use TorGuard for my systems and can attest that it works very well with extremely fast connections. It runs about $10/month, which is very reasonable.

While I cannot recommend any others, in researching this post I came across a list with plenty of pros and cons of competitors. I will definitely be watching these, and my current one, as this potential law either goes into effect or gets vetoed (which looks very unlikely).

Without further ado here is that list:

It has been a while since this site has been up, but it’s officially up again and I plan to be posting a lot more in the very near future.

With the blog up and running once again, I am going to be posting about the next major migration in my life: building a barndominium and finally getting out of the city a bit.

Barndo-what?!?!?! A barndominium is a combination of the words barn and condominium. Now, before you go wondering what kind of backwater place I come from to go living in a barn, let me clarify: it's not really a barn. Technically speaking, it's a steel-framed structure that's been engineered and designed as a "kit" for easy construction. Think of those Lego kits with instructions for building some cool-looking piece of art; this is the same idea, except with steel beams bolted together to form the skeleton of a house.

Many people wonder why I would do such a crazy thing. The simple answer is that after 14 years of living in the Dallas-Fort Worth metroplex I am tired of city life and would like to return to my roots, out of the hustle and bustle of the city. The official plan is to get my current house in Ft Worth ready for the market, sell it, and purchase some land northwest of Ft Worth for this new experiment.

Why a barndominium? That's what I first asked myself when I came across these things, and they stuck out in my mind a LOT. The huge benefit is that, unlike a traditional framed house, there is no need for internal load-bearing walls; the outside steel supports the whole structure. This means you can lay out the interior (i.e. the floor plan) however you like instead of building rooms around load-bearing walls. The other very large benefit is that it's trivial to have an attached workshop or garage.

I am constantly working on my motorcycle, car, or something else that requires a good amount of room. On top of that, with all of my activities such as camping, bicycling, kayaking, and mechanics, I need a LOT of space, and a 2-car garage is showing its limits with a kayak hanging from the ceiling, a motorcycle taking up one spot, and a car taking the other. That is along with the huge collection of tools and power tools I keep in there.

The next phase of this is to start designing a floor plan and speaking to contractors that specialize in these to price it out.

I want to apologize for the delay on this; I wrote 95% of this article months ago, then life got busy. I am back at it now with some free time, finishing it and hitting on a few new areas of knowledge to share. This is a very simple subject, so this won't be long at all.

The Code

In the same fashion as all of my coding posts, I will first give you all of the code for the entry (just make sure you insert your own values into it before using it). So here is the code:


As with most Node projects there are some modules that need to be installed. So run the following in the directory of your project:
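Assuming the two third-party modules used above are `twitter` and `mongoose` (names inferred from how the code uses them), the install step is:

```shell
npm install twitter mongoose
```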

Code Analysis

I will be skipping the part of the code where I open up the Twitter stream and the db connection. If you wish to know how these work please see my earlier entry here.

This small block of code defines the schema for the documents we are inserting into Mongo. It will have the tweet ID (which comes from Twitter, is unique for each tweet, and is also very large, so we must store it as a string), the date the tweet was sent, the text of the tweet, and the username of the person who tweeted. It's pretty simple and self-explanatory.

This will just create an instance of the model using mongoose. Simple as that. Remember the name passed in as the first argument, as that is now the object type you will instantiate. Thank you, JavaScript, for making this kind of crazy for those of us that don't use you often.

Note this first line where it instantiates the object. The name is the same name you passed into your model function previously, so don’t mistype this as it will throw some rather interesting errors on you.

Next is the easy part where we are just taking out the pieces of the tweet and putting them into our model. Nothing fancy here, but just notice that our tweet_id is a string not any form of integer.

All this does is save the model to Mongo. Very simple and straightforward.

Lastly just let this run for a few hours and then check your collection to see what has been inserted. I will continue on with a more complicated example later that does some more with the Twitter API.

The Code

Here is a simple example of how to create a Node.js client that reads tweets off of a user stream and inserts them as JSON into a MongoDB collection.

To make it simple for people that don't need an explanation of the code, here is the code in its entirety (just be aware there are parts you need to change for your own data):

Now, if you need to start from the get-go, keep reading, as I will explain exactly what this thing does. If you are new to Node.js, then please read my article here before continuing, as it might help with the above syntax and with understanding Node.js better.

To run this simply put it into a text file and run it by running: node <filename>.js


Before running this there are a few commands you should run first. These are to install the needed modules for this code, they are pretty self-explanatory. They are:
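Assuming the modules used above are the `twitter` package and the `mongodb` driver (names inferred from the code), the commands are:

```shell
npm install twitter mongodb
```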

You will also need to setup a Twitter account with an application registered for this. If you are unsure of how to do this read this document. It’s a great intro that I found when doing mine since it is not really that intuitive so maybe Twitter will make this easier to find in the future.

Notes on MongoDB Hosting

If you don’t want to host your own MongoDB instance for whatever reason I will give a huge recommendation on a hosting provider that I use. It is called MongoLab and these guys are awesome. They give you a free instance for up to 512MB of stored data and in using them for a few months they are very reliable with no downtimes so far. Plus their editor works well for seeing JSON data as will be inserted by this program.

The Code Broken Down and Dissected

Now let’s do a break down on the code itself and explain what each section does.

This is pretty easy to look at and see what it does. It creates an instance of MongoClient from the just-installed module and tries to connect with it. Make sure you change my URL string to the one for your instance. If it fails it will throw an error for you to see on the command line. Next it tells the Mongo module which collection to use (if you are new to Mongo, a collection is a rough equivalent of a table from the SQL world). You will get this from your Mongo instance.

This is the part where we connect to Twitter. You will need to log into Twitter and find the application you registered earlier and on that page is a list of keys you will use above. Just match up the labels with the keys in the code. Simple as pie (well sometimes pie isn’t that easy).

The first line of this initializes a user stream from a Twitter account and registers a callback function. The second line sets up an event-handler for when data is received from the Twitter stream and registers a callback function passing the data from the tweet into this function.

The first line of this code inserts the data (which comes from Twitter as JSON) into your Mongo collection and registers a callback function. The code inside of that callback function simply counts the number of records in the collection and spits it out to the console. The nesting of multiple callbacks can get confusing, but after writing a few it becomes pretty easy to see and if you use a good editor it will show the indentation and syntax coloring for you. I use vim as an editor, but there are plenty more out there.
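A sketch of that insert-then-count pattern, pulled out into a function (the collection is passed in, so you can see the nesting of the two callbacks on its own):

```javascript
// Called from the stream's 'data' event: insert the tweet's JSON,
// then count the records in the collection and print the total.
function storeTweet(collection, data) {
  collection.insert(data, function(err) {     // first callback: insert finished
    if (err) return console.error(err);
    collection.count(function(err, count) {   // nested callback: count the records
      if (!err) console.log(count + ' records in the collection');
    });
  });
}
```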


This was a very simple example of how to insert tweets into a MongoDB collection. Just be warned: if you run this for any extended amount of time it will take up a LOT of space. For example, I inserted 124 tweets and that took up about 360KB. Scale that up and you can see it will consume a ton of space if you let it go for a while. I recommend breaking down the JSON and inserting only what you need if you want to run this for any significant amount of time. I will probably touch on a way to do this in a future posting once I have a design and some code for it.

Hello World

So for the past few years there has been this crazy buzzword called Node, or Node.js. As many a recruiter who has called me about jobs with it has asked: what exactly is it? The simple answer is server-side JavaScript. A more complex answer is an event-based framework that runs JavaScript code using Google's V8 JavaScript engine. From here on I will refer to Node.js as Node or node, as that is the common term used for it.

Now if you want to try this thing out let’s get into the process for doing that. For the purpose of simplicity I am going to assume you have a *NIX machine around such as Linux, Mac OS X, BSD, etc.

First you want to install node for your system. Go here and download the installer for your system and install it.

Next open up your favorite text editor (I prefer vim myself, but nano or pico work just as well for this exercise) and copy and paste the following code into it:
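The classic one-liner does the job here:

```javascript
// hello.js: print a greeting to the console.
console.log('Hello World');
```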

Save the file as hello.js somewhere. Do a cd to get into the directory you saved the file in.

Now run the following command: node hello.js
You will see Hello World printed on your console. If not, double-check that you did this correctly.

That’s how to run a very simple node application. Continue on for a short demo on how to setup a very simple HTTP server along with some required knowledge for it.

Misc. Required JavaScript Knowledge

One thing that most people (like myself) who did JavaScript way back in the day never used is its event-based model or its functional programming aspects.

A huge part of programming node is a nice concept that comes from some functional programming languages. This is the ability to pass functions around as parameters to other functions. Ok yes this sounds confusing so here is a simple example:
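A sketch of that example (the function and variable names are mine):

```javascript
// Takes an integer and a function, calls the function with the integer,
// and prints whatever comes back.
function applyIt(num, fn) {
  var result = fn(num);
  console.log(result);
  return result;
}

// Pass an anonymous function as a parameter, just like the integer before it.
applyIt(2, function(n) {
  return n * 2;
}); // prints 4
```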

If you run that it will output "4" to the console. It simply creates what is called an anonymous function (think: a function with no name) and passes it as a parameter, just like the integer parameter before it. Since it is a function, it can be called like any other inside the called function. This is very useful when you want some function to perform a callback into a function you have written.

With that being said another important concept is that Node uses what is called an event-based model for calling functions. What this means is that it will call a function based on some event happening whether that is a connection being established to a database, a HTTP request being made to the application, closing of a file handle, etc. Note that all of this happens asynchronously meaning that execution continues and these events are triggered at any time or maybe not at all. This is where the callbacks I just talked about come in handy.

Consider the following code:

This very simple snippet creates an instance of the filesystem module from the Node standard library (it is called fs in there). Next it tries to read the entire contents of test.txt. If the file does not exist you will see an error about the missing file; if it does exist it will print the contents of that file. Try running it before creating a test.txt file just to see the error and how it reacts.

A great way to see how this works as an event is to create your test.txt file and copy about 5MB of data into it. It will take a moment to read that much data, so you will see the program wait after printing the 'Reading the data.' string, then write out the contents of the file once it finishes.

Simple HTTP Server

Now that a few fundamentals are out of the way I am going to give an example of something practical, a web server.

For those of you that want to just blast through this I will just give you the code here and continue explanations later:

This code will first create an instance of the http module from the Node standard library. Next it creates the server with a very simple callback that just writes out the basic header (200 is the OK code which means no errors) and a simple text string.

If you run this simply go to http://localhost:8085 and see what it does.


That’s it for this installment. Hopefully this makes it a little easier to get the hang of Node.js since it is vastly different than most other programming paradigms out there nowadays. If you want to tinker around with more of the standard libraries the documentation is located here.

Next time I will try and bring in some more interesting and advanced topics like integrating with MongoDB and Twitter.

I am finally getting my life under control after a motorcycle wreck that turned my world upside down. So with the new year come some updates, and me returning to blogging with a restart of this site.

Sorry for the lack of content; a disk failure a while back killed this site, and I learned the hard way that my backup strategy did not work. So kids, remember: always test your backup strategy, because updates can and WILL break it at some point. I hadn't tested mine in over a year, and when a RAID controller failed it killed my MySQL databases.