Deploying a Rails App to a Linux Server

Why?

Recently, I’ve deployed different Rails apps to three different Linux servers from scratch.  All three were virtual servers.  Two were Ubuntu 12.04 servers using Apache.  The most recent is a Debian server (version 6) running Nginx.  Each server took progressively less time to get running, but with each one I found myself grabbing bits and pieces from Stackoverflow.com and other sites in order to complete the job.  So, I’ve decided to take the notes I made on this most recent deploy and post them here, in the hope that others will be able to benefit from my trials.

The Host

For all of my basic websites and e-mail, as well as a couple of old Rails applications, I’ve used Dreamhost shared servers.  For plain websites this has worked out well; it is a very cost-effective way to manage them, with little downtime and few overall issues.  With Rails, however, there is a major issue.  While I understand the desire to avoid being on the bleeding edge with technology, Dreamhost has remained rather rooted in the distant past.  I have long since ceased doing updates to my first application, the one I use to manage our parish bookstore, simply because I can’t move to any current gems or technologies.

However, Dreamhost has a relatively low-cost Virtual Private Server option, where you get a virtual server running Debian.  There are various levels of configuration for the server, from setups where Dreamhost manages most of the major configuration options (web server, users, etc.) to ones where you manage it all.  In the few days I’ve been using it, I’ve found only two negatives with Dreamhost, and neither of them represents, in my judgement, much of a problem.  The first is that you have to use Debian – there is no other option.  The only other company I have to compare this to is Rackspace, where you have a wide variety of Linux flavors (and Windows for an appropriately larger fee).  With Rackspace, however, there is a bit of an increased cost that comes with the flexibility.  The other issue is chat support.  With Rackspace, I tend to have a very capable technical person in the chat app in under a minute.  With Dreamhost it has been 10-15 minutes, and I’ll admit the quality of the technician isn’t as high.  Nothing dramatic, but there is an element of “you get what you pay for.”

While discussing hosts, I should mention my choice of code repository.  I have opted to use bitbucket.org, an Atlassian product. Why?  Well, free is the good part, but I also get to keep the repository private.  The only limitation is that I can’t move past 5 users.  Since these are apps I’m building myself, I doubt this will be an issue.  You can, of course, use GitHub, but you have to pay to keep repositories private.  I’m a huge Open Source fan, but I don’t necessarily want everything I’m working on to simply be out there.  At some point in the future, I may take some of these apps and make them publicly available, but I like being able to start with them privately.

Okay, Let’s Get Down to Work

So, I provisioned the Dreamhost VPS, and since I get a week free, I opted to max out the memory available for the server.  This proved beneficial, as some of the software installation processes get very memory intensive.  Installing the Passenger Nginx module will complain if you have less than 1024 MB of RAM, and if your server doesn’t have that much, it will reboot in the middle of the installation process.  I opted to deselect every “Dreamhost Managed” option, perhaps even when I didn’t need to, but I figured it would be safest to be able to do my own installations.  This included selecting no web server to begin with.
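
For what it’s worth, if you can’t temporarily bump the RAM the way I did, the usual workaround – one I didn’t actually need here, so treat this as an untested sketch – is to add a temporary swap file before running the Passenger installer:

    free -m                                   # confirm how much memory you actually have
    sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # when the installer finishes:  sudo swapoff /swapfile && sudo rm /swapfile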

For most of the installation process, I followed the instructions at DigitalOcean.  Yes, those instructions are for Ubuntu, but Ubuntu is a Debian derivative after all, so I didn’t run into any trouble.  The only thing I did differently to begin with was to run aptitude update and aptitude dist-upgrade to ensure that everything I needed was available and current.  I also opted to follow the multi-user RVM installation instructions from the RVM website.  I have, over time, found various sets of instructions on RVM installation, and have always found it best to simply go with the author’s.
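
For reference, the commands boiled down to roughly the following.  The RVM one-liner changes from time to time, so check rvm.io for the current multi-user form, and substitute your own user name:

    sudo aptitude update
    sudo aptitude dist-upgrade
    # multi-user RVM install (lands in /usr/local/rvm)
    curl -sSL https://get.rvm.io | sudo bash -s stable
    sudo usermod -a -G rvm myuser     # placeholder user name; log out and back in afterwards
    rvm install 2.0.0
    rvm use 2.0.0 --default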

Everything else installed as indicated (I did opt for Ruby version 2 instead of 1.9.3).

Nginx

I opted for Nginx in all of this for a couple of reasons.  The first is that I really didn’t need all of the capabilities of Apache to run just Rails applications.  Down the road, I do expect to use Solr, but I believe that the installation will build its own version of Apache.  Nginx is also supposed to keep a relatively small memory footprint, which is important as I’m paying for memory, and it is supposed to be faster.  I haven’t run my application on it long enough to decide, but time will tell.

When you are done running the above instructions, it is likely that Nginx won’t work.  🙂 Surprise.

I believe the problem was that I had residual Dreamhost Nginx pieces on my server, most notably the nginx init script in the /etc/init.d directory.  For those who are adept at fixing Linux init scripts, repairing the one that’s already there probably isn’t very difficult.  For my part, though, I just grabbed the script from this page about setting up Debian and Rails.  The script is not entirely robust, as I find myself needing to manually kill the nginx processes if I need to restart them, but that isn’t much trouble and I’ll likely fix it later.  Outside of making the script executable and ensuring that it runs at startup, I mostly ignored that page, largely because the default Debian install from Dreamhost already takes care of much of it.  The other issue has to do with RVM.  I’ve long since learned the advantages of using RVM, so manually installing Ruby seems like a bad idea.  There are some other interesting-looking parts on that page, so I suspect it is more useful in general than I took advantage of.
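
Getting the replacement script wired in is just the standard Debian dance, assuming it lives at /etc/init.d/nginx like the one it replaces:

    sudo chmod +x /etc/init.d/nginx
    sudo update-rc.d nginx defaults     # run at startup
    sudo /etc/init.d/nginx start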

After making these changes, Nginx just… didn’t work.  The problem was with the Nginx log files, which were all owned by root.  That seems like a bad idea.  I modified /opt/nginx/conf/nginx.conf to run as www-data, then changed the log file ownership appropriately.  This user is very much unprivileged in the system, and so seems like a good choice to run nginx as (Apache defaults to it too, so it should seem familiar to people who have worked with Apache).
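
Concretely, that meant setting the user directive in the conf file and handing the logs over to www-data.  The log path below is the default for a /opt/nginx prefix; yours may differ:

    # in /opt/nginx/conf/nginx.conf, near the top:
    #     user www-data;
    sudo chown -R www-data:www-data /opt/nginx/logs
    sudo /etc/init.d/nginx restart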

MySQL

MySQL installation was amazingly painless. I’ve had problems with it before, but I followed the instructions from cyberciti.biz, and all was well.
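
If memory serves, the gist of it was just the standard Debian packages, plus the client headers that the mysql2 gem builds against (package names as of Debian 6; adjust to taste):

    sudo apt-get install mysql-server mysql-client libmysqlclient-dev
    sudo mysql_secure_installation      # set the root password, drop the test database, etc.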

Deployment and Rails Tidbits

A lot of what I’m going to say here will likely result in a bunch of face palming by more talented developers than I, but since I’ve not done a lot of new deploys in the past, I still trip over amazingly trivial things, so I figure (hope) I’m not alone in this.

The first bit is to remember to generate keys on both your development machine and on the server, and to provide the public keys to Bitbucket, so you can download the source code during the deploy process.  By the way, I use Capistrano for deploying my Rails apps, as I find it easier for doing updates.  Frankly, for an initial install I don’t think it helps much, but down the road you’ll be happy if you use it.

When you create the keys on your server, make sure you do not use a passphrase.  Although the server will ask for your passphrase during the deployment process, Capistrano doesn’t seem to actually transmit it, so your deploy will fail.
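
Something like this on both machines does the trick (the comment is just a placeholder); the public key is what you paste into Bitbucket under your account’s SSH keys:

    ssh-keygen -t rsa -N "" -C "deploy@example.com"   # -N "" means no passphrase
    cat ~/.ssh/id_rsa.pub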

Also, don’t forget to run cap (stage) deploy:setup.  I always forget to do that on a first install, then watch the deploy fail because the target directories don’t exist.  Before you deploy, however, you should change the /var/www directory to be owned (chown www-data:www-data) and writable (chmod g+w) by the www-data group.  I should have mentioned that my deployment user on the server is a member of www-data, which makes it easier to make changes during the installation process.   It turns out that giving www-data too many privileges across the board is not wise.  Plan on running the Rails application under a dedicated service account, and give that account permissions to the folders the application actually needs (typically just the public, tmp and log directories and their subdirectories, as well as any custom directories you need to write to).  The installation itself can run using your own user account.
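
In command form, with “deploy” standing in for whatever your deployment user is actually called, and “staging” for your stage name:

    sudo usermod -a -G www-data deploy
    sudo chown -R www-data:www-data /var/www
    sudo chmod -R g+w /var/www
    # then, from your development machine:
    cap staging deploy:setup        # or: cap production deploy:setup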

Two other issues I ran into had to do with Bundler and with a JavaScript runtime.  I ran the deploy and received an error that there was no bundler.  I performed a gem install bundler on the server, but that didn’t help.  I then discovered that I was missing a require 'rvm/capistrano' at the top of my deploy file, which is necessary for doing Capistrano deploys in an RVM environment.
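
The top of my config/deploy.rb ended up looking roughly like this – a sketch rather than my full file, with the Ruby string matching whatever you installed via RVM:

    # config/deploy.rb (Capistrano 2.x)
    require 'rvm/capistrano'        # from the rvm-capistrano gem; makes cap use the RVM Ruby
    require 'bundler/capistrano'    # runs bundle install as part of the deploy

    set :rvm_ruby_string, 'ruby-2.0.0'   # match the Ruby you installed
    set :rvm_type, :system               # multi-user RVM lives in /usr/local/rvm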

The JavaScript runtime issue is best dealt with by installing Node.js, which you can do by following the instructions here.  You can go get a cup of coffee and a doughnut while the installer is running.  It takes a while.
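
I won’t reproduce those instructions, but the shape of a from-source build is roughly this (substitute whatever version is current – compiling is the part that justifies the doughnut):

    sudo apt-get install python g++ make curl
    curl -O http://nodejs.org/dist/v0.10.26/node-v0.10.26.tar.gz
    tar xzf node-v0.10.26.tar.gz
    cd node-v0.10.26
    ./configure
    make
    sudo make install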

Another problem was with using Zurb Foundation.  Since the Foundation CSS file doesn’t actually exist until the assets are compiled, the application will not run when you access it from your web browser.  So, it is necessary to run bundle exec rake assets:precompile at the end of your installation.  Apparently you will also need the compass gem in your Gemfile ahead of the foundation gem.
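
The Gemfile ordering that worked for me looked something like this; I used compass-rails, though depending on your versions plain compass may be what you need:

    # Gemfile -- compass must be loaded ahead of foundation for the stylesheets to compile
    gem 'compass-rails'
    gem 'zurb-foundation'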

Finally, if you are running multiple stages on your server (I have a staging, um, stage, for testing new stuff out with users), you want to make sure the RAILS_ENV variable is properly set for each one.  You can follow the instructions at the mod_rails (Passenger) site for doing this.
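
For the curious, with Passenger under Nginx this comes down to a per-virtual-host directive; a sketch, with the server name and paths as placeholders (passenger_root and passenger_ruby go at the http level, as the Passenger installer tells you):

    server {
        listen 80;
        server_name staging.example.com;
        root /var/www/myapp/staging/current/public;   # Capistrano's "current" symlink
        passenger_enabled on;
        rails_env staging;        # what RAILS_ENV gets set to for this virtual host
    }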

The Future of Cloud

Every time a significant shift in information technology appears, there is a period of extremes in response to it, where one minute everyone is really excited about the option and the next, an extreme pessimism sets in.  Cloud computing appears to be going through the same thing.  In a post from this morning’s CIO Journal, Salesforce.com’s declining profit forecast is used as a jumping off point to discuss some of the conversations at the MIT Sloan CIO Symposium, where the winner of this year’s CIO Leadership Award expressed some hesitancy around Cloud computing.

Now, I’m operating off of only a couple of quotes and paraphrases, so it is always hard to gauge the actual context of the remarks, but a superficial read of the blog post might lead those pondering a move to the Cloud to doubt the wisdom of doing so.  However, this is really just a brief foray into pessimism.  In fact, a closer read of the piece merely underscores the sort of analysis that frankly everyone should go through before any technology decision is made.  My only real disagreement stems from the fact that either Mr. Blanchette is guilty of a bit of hyperbole, there is more to what he really said, or he is somewhat mistaken, when he is quoted as saying that value propositions from Cloud vendors must be drastically better than those of on-premise vendors.

Frequently, business success comes at the margin, so the only reason I could see for setting the bar as high as “drastically better” would be if he has his eye on other issues.  For instance, in a large organization with a large technology investment, movement into the Cloud can mean a significant, and negative, disruption to the organization.  Movement to the Cloud often means workforce realignment at some level.  In fact, I could envision a situation where movement to the Cloud could mean an increase in internal labor in order to manage the relationship.  This only seems plausible where the Cloud offering represents a new system, but I can envision such a situation.  At the same time, the architectural complexity of the environment has significantly increased, and in most circumstances the level of control would decline, at least a little bit.

However, as I’ve posted before, the control issue is less real than people imagine.  I think the same holds true for the SLAs Mr. Blanchette refers to.  Perhaps his company is different, but I have watched SLAs (which in most companies mean Service Level Acceptance – that is, the level the customer is forced to accept – as opposed to an actual agreement) gamed by a variety of techniques, such that outages and downtime are harder to quantify.  I have one Cloud vendor where I can get a pretty talented support person on the line within seconds for my apps and servers.  Even when I worked for a large company, that was absolutely never the case unless I was a personal friend of the individual I needed to talk to.  Frequently, the problems that the IT and business people further down in the trenches experience with their infrastructure and app support teams are shielded from senior management and the CIO until they become really severe.  Even a value proposition that appears only marginally better on paper may reflect a significantly better reality.  The question is, how would you know?  The only real way, of course, is to ensure that you really understand your environment and the way it works for people.  How to do that, of course, is the subject of a great number of management classes and programs, so I’ll leave it for another time.

One thing I did like from the symposium was the understanding on the part of the CIOs that people obtain Cloud services on their own, independent of IT, because most corporate IT organizations are not responsive enough.  I just hope that this wasn’t a recent revelation.  The inability of most IT organizations to be responsive is very old news.  It is why, back in the early ’90s, FileMaker and 4D (and later Access and Excel) became the bane of IT’s existence.  Since IT couldn’t provide any sort of rapid application delivery to address emerging needs, people found tools that allowed them to go it on their own.  Of course, these tools were often used to create poorly engineered databases, but they worked, and business units grew to depend on them.  At some point the original designer left, or they needed the application to support more people, and they would hand over the dysfunctional system to IT to manage.  The situation hasn’t changed.  We live in a world where some large organizations didn’t finish moving to Windows XP until over a year after it left mainstream support and a mere 4 years before End of Life.  That is the reality of corporate IT.  I have only seen glimmers of the organizational transformation that leads me to think this might change at some point.  Until it does, the only thing that IT can really do is take the lead into the Cloud and be supportive of businesses seeking solutions there.  I have one user who needs a moderately expensive tool that is available in the Cloud.  I could tell her to wait at least a year, until I have the bandwidth to attempt to bring such a solution in house (assuming I can find the dollars necessary to support the infrastructure costs), or I can let her look at systems as a customer, with my presence only as a second opinion and to verify that these systems will fit into our overall strategy.  The former means she is operating with a significant handicap for at least a year; the latter, that we are able to achieve more in less time.  Which do you think is the best approach?

Cloudy with a Chance of Leverage

Today’s CIO Journal published a piece comparing and contrasting the big 3 of cloud computing. There were several things of note, but one particular line caught my attention: “Amazon.com appears most willing to enter into customized agreements with its larger customers…” Amazon is, from the sounds of the article, most willing to create some level of customization in its offering – but only for larger customers. I get this. Losing one small customer is unlikely to have a significant impact on the bottom line. Losing one big customer would be a different story.

In many areas, smaller customers can band together to offset this “large customers get the perks” approach and generate the leverage they cannot enjoy individually. It occurs to me that this could be possible in the cloud: some form of consortium could form to negotiate pricing, SLAs, and other topics of interest. Of course, small users could opt for other cloud providers that have a better offering in terms of price and control. However, for many there are solid reasons behind wanting to stick with a major player, and a consortium would allow greater opportunity.

Rails Integration Testing with Omniauth and CanCan

Implementing CanCan and Omniauth with support for multiple authentication providers, together with integration testing, proved somewhat challenging for me.  For the most part, I think the challenges reflect my lack of sophistication with Rails development, but there were a reasonable number of questions out on the web from other people with similar challenges, so I thought I’d document what I’ve learned, and how I was able to get Capybara working with it all.
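
As a preview of the Capybara piece, the trick that eventually worked for me was OmniAuth’s built-in test mode; a minimal sketch, with :twitter and the user details standing in for whichever provider and data you use, typically placed in a spec or test helper:

    # spec/spec_helper.rb (or test_helper.rb) -- OmniAuth test mode for integration tests
    OmniAuth.config.test_mode = true
    OmniAuth.config.mock_auth[:twitter] = OmniAuth::AuthHash.new(
      provider: 'twitter',
      uid:      '12345',
      info:     { name: 'Test User' }
    )
    # With test mode on, a Capybara visit to /auth/twitter skips the real provider and goes
    # straight to the callback with the mock hash above.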

Test Driven Development and COTS

COTS is traditionally taken to mean Commercial Off The Shelf.  However, I’ve also heard it referred to as Configurable Off The Shelf.  This latter term more accurately describes a large class of systems implemented in business environments today.  The package can largely be deployed as is, but typically has a significant number of configuration options to allow it to be more closely tailored to its target environment.

When preparing to deploy one such system earlier this year, a colleague and I thought it would be interesting to leverage Microsoft Test Manager (MTM) to create and record test results covering the final testing of the product.  Both of us being new to automated testing and things of that nature, we didn’t foresee the challenges we would face.  The largest was the lack of detailed reporting available at the end to use as documentation for compliance purposes.  In addition, the system is not very supportive of stopping a test mid-stream and continuing later.  If you stop a test you frequently need to declare it a failure, even if the issue you were facing isn’t, strictly speaking, a failure.  Of course, this sort of testing is not really what Test Manager was meant for (or any other code testing tool).  The end result was that we decided we wouldn’t use MTM for similar activities in the future.

At the same time, I was working on another project that was all about code development, but where the significant existing code base had no tests written for it.  Typically, attempting to retrofit tests onto such code is likely to result in a bunch of errors and very little, if any, value.  However, I was playing QA for an offsite developer, and I decided that it would be useful to have some predefined tests to run whenever the developer did a code check-in.  I turned out to be right.  I didn’t have to rethink what steps to run through every time.  I could also refine the test list and hone it so that I was hitting all of the important parts.

As I was going through this process, it occurred to me that testing tools, and Test Driven Development (TDD) techniques could actually be of benefit during a COTS implementation process.  Since tests in a TDD environment actually serve as a form of requirement, you can write up your tests either as a copy of an existing requirements document, or have the test list serve as the requirements document.  Of course, the important point here would be to make the test titles suitably descriptive (as they frequently are in a TDD environment).

As configuration of the system progresses, you run all of the tests (skipping the ones covering work that hasn’t started yet), until all of your tests pass.  That tells you that you are ready to enter the formal testing phase for compliance systems, or that you are ready to release.  This also facilitates an Agile implementation of a COTS system: as you progress through the various Sprints, User Stories turn into Test Cases, and when those cases pass, you are done with a Sprint.

I haven’t actually attempted this yet, but it seems reasonable, so I will definitely attempt it.

Old Habits

I have written recently about my discovery of testing when developing code, and of test driven development.  At the time, I was amazed at how well it revealed unused code as well as bugs in the code that is being used.  Unfortunately, I have also discovered that testing is a discipline.  Like any other discipline, it is one that you have to work at establishing, and you need to work to maintain it.

I came to a spot while working on the LiMSpec project where I was trying to develop some unique code, and I simply stopped writing new tests and completely avoided running the existing ones while I was busily “hacking.”  I had reverted to my old bad habits, established over 20 years of coding in everything from Assembly Language on up the complexity tree: code, code, code, then maybe document, then maybe design.  Test only when everything is done.

Surprisingly… okay, not surprisingly at all, I had broken various things in the process.  The only way I knew was when I ran the tests to generate a coverage report.  So I’ve now gone back, fixed the broken parts, and extended the existing test suite.  There is still more work to be done.  I’d like to get close to 100% coverage, but am only at about 85% right now (I think it might actually be quite a bit higher, as RCov seems to have missed many places where the code is exercised by a test but not captured), and will seek to get there during the second release.  In the meantime, I’m going to try to force myself to move much more toward the Test Driven Development approach – where the tests are written first, then the code written to pass the tests.

The Advantages of Being Too Big to Fail

The Wall Street Journal’s CIO Report tells us that SAP has finally accepted that their software has usability problems.  While it is laudable that they have finally recognized the problem, and equally laudable that they have decided to fix it, or at least attempt to, the fact that it has taken so long is something many IT professionals should think about.

I have blogged before about the importance of usability in systems.  Apple figured that out a long time ago, and they have become very successful.  Many other companies weren’t so concerned, and have faded away, or are in the process of doing so.  The advantage SAP has is that they are, in effect, too big to fail.  Their installed base is so large that they can afford to wait until major customers start deploying non-SAP software in their environments before fixing a fundamental flaw in their products.

The question should be whether your company is too big to fail.  This question should be asked even if, especially if, your primary product is not software.  As I noted earlier, many corporate IT groups are primarily concerned with deploying a solution, not deploying a usable solution that the customers need.  What ends up happening is that IT fails to deliver additional value to the business.  Large numbers of users don’t adopt the solution and much of the purported advantage of the solution is never realized.  In addition, IT develops a bad reputation and future projects slow down or fail because of it.

In the 21st century, many employees have grown up with very usable technology (Apple, Facebook, Amazon, etc.).  They are very productive when using these technologies and expect the same usability at the office.  I don’t know how many times in the last few years I’ve heard someone say “I’d just like it to work as well as Amazon.”  When the tools IT delivers are not that usable, users will find other ways to succeed.  Typically, this is not a victory for either side.  The users’ home-grown solution may be faster for them, but probably not as fast as a well-designed solution, and their failure to use the deployed solution likely means that other value propositions are never realized (such as data mining opportunities and the like).

What is sad about this story isn’t so much that SAP is a bit slow in figuring things out, but rather what it means for the numerous companies that use their software.  How much inefficiency and waste exist in their customers’ organizations because the product is so hard to use?  How much money is wasted on excessive training classes for new users?  How many miniature IT projects exist to provide workarounds?  How many extra employees are required to manage the system rather than produce and sell product?  I wonder how many of those organizations are too big to fail.

A fuzzy thing happened to me on my way to the cloud

If you’re at all involved with IT, you know that every other article you see these days seems to have something to do with cloud computing.  Increasingly, these discussions are turning to the world of science.  However, there is one group within this world that has remained fairly resistant to the cloud, and that is the FDA-regulated group.  My familiarity is primarily with the pharmaceutical and device industries, and that is, of course, where I see the most resistance.  Typically, there are two major concerns voiced, one of control and the other of security.  I would like to challenge both of these concerns by questioning the underlying assumptions.


The first concern, that of control, is usually voiced as a desire to “know” who can be talked to if something goes wrong.  That is, if the data is internal to the organization, you can cause the people charged with safeguarding the data endless amounts of grief if something happens to it.  Perhaps (and I think this is the underlying assumption) you could even get them fired if something serious happens.  If the people responsible for data safety are external (at, say, Amazon or Google), then you don’t have that sort of power.  There are two problems with this perspective.  The first has to do with motivation.  Assuming that the people doing the job of protecting your data do so out of fear of losing their jobs (which they probably don’t), they are just as likely to lose their jobs for failure to perform at one company as at another.  The owner of the data may have less direct impact on that, but realistically, the risk to the data guardian doesn’t really change.  The bigger problem is the “so what?” problem.  So what if you can get someone fired?  Does that get your data back?  Does it restore your access if it has gone down?  Of course it doesn’t.


The real question needs to be what quality level you can expect from internal vs. external.  By quality, I mean not only reliability and safety, but also cost.  Let’s be honest: cloud companies typically have a much higher quality of infrastructure available for their data centers than does any individual business whose primary job is not providing a data center.  If you doubt that, go tour the data center in your company.  In this day and age, even small companies can and do have robust data centers.  However, compare what you have to what Google has.  With a cloud solution, you can access such elaborate technology for a much lower cost than if you were to attempt to do the same yourself.  While cloud outages are hardly unknown, are outages involving your internal infrastructure unknown?  If you think about it, such outages and service interruptions are probably not that uncommon in your environment.  They may not seem as big, since they don’t impact hundreds of thousands of users, but they are likely no less significant to your data and your access to it.


In terms of talent available to you, either in a support or perhaps even customization role, you may find that the talent pool available from cloud vendors may be more substantial than what you are able to hire within your company.  This may make your data not only more secure, but if you are having things customized or enhanced, you may find yourself more satisfied with the end results.


The other concern is one of security.  Typically, this is the fear of the “open system”, by FDA definition (21 CFR 11.3(b)(9)) a system where access is not controlled by the same persons who are responsible for the data itself.  Two points need to be made here.  The first is that this regulation was written back in the days when there was the internet (still a relatively new phenomenon), and then there were the computers in your company, and the two were generally not connected.  Now, when everyone’s internal systems are attached to the internet, you may not have nearly as much control over access to your system as you think you do.  In other words, you may be safer treating your system as an open system regardless of whether you own the servers or they are located in Googleville.  Certainly, if you have employees accessing your data from their homes, you will employ all manner of additional security, and that has become relatively common.

The second point is that the additional requirement for open systems is that you have a means of encrypting data.  That is all.  Really.  In fact, here’s the entire regulation:  “Persons who use open systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, as appropriate, the confidentiality of electronic records from the point of their creation to the point of their receipt. Such procedures and controls shall include those identified in § 11.10, as appropriate, and additional measures such as document encryption and use of appropriate digital signature standards to ensure, as necessary under the circumstances, record authenticity, integrity, and confidentiality.”  That is it.  Today, encryption technology for transmitting data is as common as the https you’ll occasionally see in the URL bar of your browser.  In other words, it isn’t much of a challenge.  Otherwise, the same set of rules as for a closed system applies, so you’ll need to take the same care. You’ll want to audit and evaluate the systems in use by the cloud company, just as you would your internal systems.  But there is really nothing additional you’ll need to do.


Now that I’ve said all this, I need to point out that contract terms still need to be ironed out.  Sometimes large cloud companies have less-than-ideal guarantees around uptime and the like.  You’ll also want to be sure that you are comfortable with what happens if the cloud company ceases operations, or you decide not to use them anymore.  Where does your data go, and how easy is it for you to get it back?  This is easily overlooked, and people can be burned by bad decisions in this area.


What I’m trying to get at with all of this is that we need to be clear in the thought processes behind deciding on the cloud.  I think that, to date, many of us have been driven by fear, and fear tends to lead to fuzzy thinking.  That won’t get you where you need to be.

I Have Seen the Light

At one point in an Orthodox Divine Liturgy, a hymn is chanted which starts with the phrase, “We have seen the true light…”  While not of similar religious significance, occasionally programmers can have epiphanies that lead us to at least have a similar feeling.

For me, the true light I have finally seen is around testing and test driven development.  While I’ve always supported testing – at least the sort of ad hoc testing most developers do while coding, maybe a more thorough approach along the way by QA people, and UAT at the end of the project – the notion of automated testing as you code had really not appealed to me.  I didn’t get the point.  Granted, I hadn’t really looked into how it was used, or why – even though Rails builds a nice testing framework for you.

On the project I am working on that started out in appFlower, I decided that I needed to re-evaluate my traditional behaviors.  It began with using a deployment tool for the app, which downloads the latest changeset from a version control system and deploys that to production.  The tool also allows you to quickly and temporarily bring the site down while leaving a nice maintenance message.  This will make deploying changes to production much cleaner than my former brute force approach of just copying everything up to the server, and adjusting a couple of the files so that it was running as production.

The latest re-evaluation was around testing.  I researched why and how people were using Rails tests, and was intrigued after reading some online material and watching a video on the subject.

The presenter introduced the notion of “Technical Debt”: basically, legacy design and code (frequently dead code, in my experience) that requires anyone expanding the system to continue using the old system architecture, simply because it is not well documented, nor do you have any means of truly knowing whether you broke something when you alter the architecture.  I have lived with, and am currently suffering with, such a system.  A great amount of effort is spent coding around limitations, because touching the core code might have significant unintended, and often hard to discover, consequences.

So, last week, I started running at least the basic tests that Rails generates as you are building an app.  I had to spend time dealing with the authorization and role-based aspects of the app, which are a challenge to handle in testing, but even during that work I have already reaped the benefits.  I was amazed, for such a new app, at exactly how much dead code and how many ill-conceived design decisions had already crept in.  Writing test cases forces you to really think, at a detailed level, about how the application will respond to the user.  I haven’t even started the process of testing the UI, but I plan to, because this application is intended to go into production relatively soon with a narrow range of functionality and then expand over time.  Using a more test-driven approach should help this growth occur in a sustainable manner, and hopefully push the redo horizon out a bit further (the point at which the Technical Debt grows to a level that causes Technical Bankruptcy).

The Wilting of appFlower

I thought it would be useful to provide at least a brief postscript to the appFlower evaluation.  We opted to leave it behind for most of the reasons I stated (in addition, when I wrote that post I had just submitted a couple of major questions/issues, to which I have still not received a response, so the lack of support is seriously underscored).  I’d have to go back and look at the relative time spent, but as I suspected, I had a basic app running in probably no more than 10% of the time.

Most of the additional time I needed to spend was on upgrading my Rails IDE (yes, I’m lazy enough to use one), the version of Rails on my machine, and a number of other supporting tools, and then on attempting to get the app running on my provider in a shared hosting environment.  This last bit of time was useful, in that getting used to Capistrano for deployment was something new.  I was able to get everything configured so that I could move all of the correct components to production when ready, but was having trouble getting the FastCGI script configured (the version of Passenger that my host uses is way outdated, and the only alternative I have is to pay for a virtual private server so I can install everything myself).  Bringing up appFlower on a server really would take only slightly longer than the time required to download the VM image, so this is an appFlower plus.  On the flip side, I do not like the sense of editing a production application in real time, and I’m not exactly sure how you would successfully avoid this.

The other large amount of time was spent on using a new authentication approach.  Previously I used Authlogic, but there is a relatively new gem called OmniAuth which has providers that allow for authenticating with everything from a system-specific user/password to Facebook, LinkedIn, Twitter, OpenID, etc.  I was particularly interested in allowing a single user to have multiple authentication methods, so this took some time to figure out.  The authentication facility available in appFlower, if you can actually get it working, does not allow for third-party authentication.  There are tools that allow you to do this with Symfony, which, of course, means that to accomplish anything significant you need to work in Symfony, not appFlower per se.  If I were a PHP/Symfony expert, perhaps this would not be a big deal.  Of course, if I were a PHP/Symfony expert, I suspect that appFlower would be an incredible encumbrance, and I wouldn’t use it in the first place.

The actual time spent on the app in Rails itself has been fairly minimal, and now I can move along adding new features and capabilities, with a nice look and feel to the whole thing.  I’ve also decided to focus more energy on using the Rails testing framework, which I will discuss in a future post.

My conclusion is that appFlower is marginally interesting but likely of little value over time.  While it looks pretty, it has significant limitations that require an actual developer to overcome, so its primary market of non-developers doesn’t look to be well served.