Monday, August 16, 2010

New Gig

Today officially marks my first day at SB Stone. I thought a post was warranted to talk about what we are trying to do and how open source, .NET, mobile, and all the other fancy buzzwords are part of it.

I am the first full-time dev in what the company is calling our "interactive" division. SB as a whole really has four main markets: staff augmentation, IT services, support and this new interactive group.

So what does interactive mean? Well... really all it means is we want to build a "studio" style dev wing where we do both project work for third-party clients and build our own products to market and resell. We're hoping the project work supports the dev time needed for our own things until those become profitable. As with anything, some will hit and some will miss.

So why did I come over here? Mainly because the new group wants to build a new brand and image for the company, a large part of which will be participating in and contributing to the dev and tech community. That means attending and speaking at conferences, sharing knowledge through blog posts / Twitter / user groups, all of it.

From a tech standpoint our group is looking at a mix of .NET and open source tools, specifically Ruby on Rails (although we have PHP resources as well.) We're also training up on mobile, including iOS and eventually Android, so a whole lot of different tools. My role from a marketing perspective is mostly going to be focused on the Rails side, though. So hopefully in the near future I'll be going to more Ruby conferences and events to talk to people about what we're doing and how Rails is playing in the marketplace.

The thing I'm most excited about, though, is building out a proper agile dev team. Earlier today we committed to following Kanban via AgileZen for my first project, and hopefully many more going forward. We're adhering to test-driven development, continuous integration and emergent design as well. The only "hot" agile practice we aren't doing quite yet is pairing, and only for lack of co-located people (and remote pairing, in my opinion, sucks.)

So that's what I'm up to. If you're an independent or even a full-timer looking to help us out, let me know. We're always looking for contractors to keep available, and we'll hopefully be building up more of an in-house team in the coming months. Even more importantly, if you know of any small businesses looking for Rails / PHP / .NET help, LET ME KNOW :-)

Wednesday, August 11, 2010

So You Want To Be A .NET Dev

Before I open the flamegates, please note this post is a tongue-in-cheek reply to Kevin Gisi's excellent "So You Want To Be A Ruby Dev". I personally work on both platforms and they each have their pluses and minuses. Kevin's post is both scathingly hilarious and sadly true; I hope this rings the same way.

So You Want To Be A .NET Dev

Fantastic, welcome aboard! The .NET community is a somewhat fractured but enormous place to be. Some of us follow test-driven development (and a library or two might have tests if you're lucky), configuration over convention, and the Alt.NET guys have heard of agile methodologies. We have documentation tools like... comments... and XML comments... which means that if the devs who wrote the library you're working on have read Code Complete it might not be a bunch of spaghetti code. So leave your hippy language and development stack, and come join the corporate monolith, maintaining job security by using only the best tools from Microsoft (tm).

So, let's get you started! You'll want to grab a copy of the .NET Framework. Which Framework? Well, seeing as how you're probably going to be building new apps and maintaining apps from 2001, you should probably just install all of them. 1.1, 2.0, 3.5 and 4.0 are all available to download. There's also 1.0, but we don't speak of that, and Mono, but no one uses that. What's the difference? Well, mostly features; we have a LOT of people to support after all, so each new version shoehorns more and more stuff in until we can be everything for everyone. We wouldn't want to have to rely on the community to add new things... who would support it? Don't worry, you can install all of them at the same time, they play nice, but I hope you have a nice big hard drive on your PC.

Oh, you're running OSX? Hmm... well, you could try installing Windows? No? Well how about VMWare or Parallels, you know, those virtualization products that make your computer work like Windows. IDE? Well that one's easy, you're going to need Visual Studio, but which one? Well, you could install Visual Studio 2010. With that you'll be able to maintain all those .NET 2.0 - 4.0 applications, but you're also going to need Visual Studio 2003 for those pesky old 1.1 "legacy" apps. That said, you better hope your whole team is on 2010, otherwise your project files will get hosed and no one will be able to open your solution. What's a project or solution? Well, let's talk about that later, OK? Oh, and you're going to want to buy ReSharper; seriously, all the best .NET people use it. It adds all the stuff to Visual Studio 2010 that Microsoft will "borrow" and add in 2012. None of us really use Macs...

Ready to get started? Alrighty, well which framework do you want to use? The two big web ones are WebForms and MVC. WebForms is typically used in huge applications for massive corporations that don't know any better. It has a great designer and you can build your entire high-maintenance app without writing a single line of code, isn't that awesome? MVC is our newest pet project that all the big wigs in the "community" swear by. It's a distinctly Microsoft invention; you see, they broke apart data access, application logic and view rendering and introduced unit testing. It's revolutionary; we didn't borrow the concept or project layout from anyone, especially not those Ruby on Rails blokes. For Windows client dev you can use WPF or WinForms, but you should really use WPF. Sure, it still feels like beta software after two major revisions, but it's new and shiny and we promise not to deprecate it in a week in favor of something new. Speaking of something new, there's also Silverlight, which is kinda like WebForms and kinda like WPF and kinda like Flash.

OK, next up is our package management app. Hahahaha, just kidding, we don't have one. First off, why would you ever want to use a tool not made by Microsoft? If you MUST, there are lots of excellent control libraries our partners like DevExpress and Telerik have built; their sales staff would be happy to help you configure them. Sometimes different versions even play nice together. We have a really nice Word doc for you to read that explains all the dependencies you need to install; by next week you should be up and running.

Oh, by the way, a lot of those third-party libraries are now designed for .NET 4, but don't worry, you can still use the old versions, if you can find them.

As far as database persistence goes, well, you'll be using SQL Server for sure. Best get that checkbook back out. Oh, you mean code-wise? Well, there's DataSets, LINQ to SQL (but we already cancelled that), EF1 (but it's universally panned) and the EF4 CTP (but even our own MVPs admit that's three years behind what other communities have.) You could also use NHibernate, but that's not built by Microsoft, so good luck getting it past your CIO or MVP Architect.

Hmmm, authentication? Well, you'll probably want to use the built-in authentication provider, and it's actually pretty good; it even supports security roles. Unfortunately, if you're in MVC it doesn't really follow database conventions, but it'll work.

Last thing, we'll have to set you up in our TFS system. TFS is this great version control tool Microsoft gave us that's light years ahead of their old tool SourceSafe; it only screws up a merge or loses information occasionally. You need to work offline? Well... you CAN, but you'll be prompted a few times to reconnect and sometimes your code won't merge back into the server right. It's 2010, when would you not have an internet connection? Also, just to limit those pesky merges we've turned on locking checkouts; we wouldn't want people working on the same file, that would be chaos.

Anyway, welcome to the team. I'll swing by in a week and see if you've gotten anything done.

Tuesday, July 20, 2010

GiveCamp 2010

This past weekend I had the opportunity to be part of something the likes of which I had not yet seen in the Cleveland area. From Friday until Sunday, the folks at Lean Dog Software, in collaboration with a handful of organizers and sponsors, hosted Cleveland's first GiveCamp.

GiveCamp is a national program that pairs up talented software developers, designers and other technical specialists with local non-profit organizations to complete technical projects that the non-profits would otherwise be unable to afford. This past weekend was the first time Cleveland has hosted one of these events, and I think we did our geek brethren in other, much larger, cities proud. When it was all said and done, over 20 charities were helped by over 100 volunteers, making GiveCamp Cleveland the largest first GiveCamp ever. It wasn't quite the largest of all time, but maybe we'll take that record next year.

For those curious, the weekend ran like this.

Friday night we were paired up with our charities and sat down with a sponsor from each charity to talk about their needs and what our goals for the weekend were. In most cases the plan for the weekend was mapped out in advance by a business analyst who was pre-assigned to each team. However, in the case of my charity, The League of Women Voters of the Cleveland Area, we came in knowing virtually nothing. You see, the organizers had anticipated having enough people to help roughly 15 charities, but come the week of the event they had enough to add seven more. The LWV Cleveland was one of the late additions, so we had to scope and plan the project on day one instead of in advance.

Our sponsor Sherece was amazing. She understood our limited time frame and was willing to compromise when needed but also did a great job of explaining the purpose of her organization so we could build out the best site for her that the time and resources allowed.

From Friday night through all day Saturday the teams went to work; in most cases this meant building out and customizing a WordPress installation for each charity. Something like 15 of the teams used WordPress, which is really a testament to the product and how flexible it is.

Throughout the weekend stand-ups were held to make sure the teams were on track. Also, breakfast, lunch and dinner were provided both Saturday and Sunday.

Sunday evening we trained our charities, presented the final projects and handed over the keys, so to speak. The charities all seemed quite grateful for the work that was done, but in reality it was the volunteers who seemed most upbeat. I think I speak for all of us when I say we had an absolute blast using our abilities and skills in a way that we very rarely get to. I told some friends in advance of this weekend that I don't know how to build a house, but I do know how to build a website, so getting a chance to do that for a great organization like the LWV was not only fun but an honor.

Just in case anyone is curious, the site we built can be found at We had some issues connecting with their hosting provider, so it'll be switched over to their main site in the next week or so.

Some thanks to hand out. First off, to all of the organizers: especially Mark Schumann, who acted as MC and leader of the organizers; Andrew Craze, who served as the developer lead; and Jon Stahl, whose company Lean Dog hosted the event in their amazing workspace (the old Hornblower's boat on Lake Erie.) Also thanks to the IEEE for acting as a non-profit sponsor of the event and to Burke Lakefront Airport for offering up the entire terminal to be used by the campers when the number of volunteers overflowed the boat.

On a personal note, I moved to Cleveland, Ohio five years ago because of a job offer after college. When you think of Cleveland you don't really expect to find a vibrant and exciting community of software developers and tech enthusiasts. This isn't Silicon Valley or Boston or Austin. Let me just say, if you believe that, like I did, you couldn't be more wrong. Over 100 people showed up this weekend from dozens of companies all over Northeast Ohio. Expertise ranged from SQL to .NET, Ruby, ColdFusion, PHP; you name it and it was represented. This is a great city to live in, with great people and one hell of a technical community, and that, friends, is why I'm #happyincle.

Anyway, thanks again to all those involved. I can't wait to do it all again next year.

Friday, June 18, 2010

RCov w/ RSpec-2

So I've spent the last couple weeks updating an old Rails application to Rails 3 and RSpec 2, both of which are currently still in beta.

Early on I noticed that the new RSpec doesn't quite support rake spec:rcov yet. The new core framework has the rcov flag, but there's no default rake task. RSpec-rails also removed the spec.rake task from your Rails project and centralized it into your gem install, which confused me to no end. So in case anyone else wants to run RCov on their code, I thought I might post what I did to make rake spec:rcov work again.

First, a couple prereqs. I'm using an RVM install of all of this with Ruby 1.8.7 (only because 1.9.1 gave me problems w/ Rails 3), so in the steps below replace the paths I give with the path to your gem installation folder and you should be good.

1) Open ~/.rvm/rubies//lib/ruby/gems//gems/rspec-rails-2.XXXXX/lib/rspec/rails/tasks/rspec.task in your favorite editor.

2) Add this rake task

desc "Run all specs with rcov"
RSpec::Core::RakeTask.new(:rcov => spec_prereq) do |t|
  t.rcov = true
end

That's it. Pretty simple.

A caveat. Right now this runs ALL of your specs in both your RVM ruby folder and your project. I haven't a clue why; hopefully someone smarter than I will comment. That means rcov takes a while, and when it's done the report is kludgy. If someone does help me out I'll post an update; in the meantime this works.
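If editing a file inside the gem feels fragile (a gem update will wipe the change), a similar task can live in your own app instead. This is only a sketch under my own assumptions: the file location, the task name, and the rcov_opts exclusion pattern are illustrative, not something the rspec docs prescribe, so adjust to your setup.

```
# lib/tasks/rcov.rake (hypothetical location) -- keeps the task in your
# project so gem updates can't clobber it. Assumes the rspec-2 beta gems.
require 'rspec/core/rake_task'

desc "Run all specs with rcov"
RSpec::Core::RakeTask.new("spec:rcov") do |t|
  t.rcov = true
  # excluding gem paths may also keep gem specs out of the report
  t.rcov_opts = %w[--exclude gems/,spec/]
end
```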

Wednesday, March 31, 2010

Unit Testing Role Based Security w/ ASP.Net MVC

I recently started working on a little project management demo project in ASP.Net MVC. As part of that I wanted to make sure I unit tested as much as possible so that it could act as a good example of how to do things for other people in the future.

I ran into a bit of a snag when trying to verify that the right security roles are required on my actions. I found out from a few blog posts that ASP.NET follows the .NET 2.0 membership and role provider model, and that you can simply decorate a controller or action with [Authorize] attributes to restrict access. Pretty cool and really simple; unfortunately, not very testable. When you unit test actions in MVC you call the method directly from the test, bypassing the routing engine where these authorization checks are made.

Fortunately I found Paul Brown's excellent post on how to test that you have applied the right roles. I liked what he did, but it was a bit too case-specific for me, so I made it more generic. Here's what I came up with:

I took Paul's code and turned it into a series of three controller extension methods which verify whether an entire controller requires authorization, whether a particular method requires authorization, or whether a particular method requires a given role (I have not yet written the obvious fourth case, an entire controller requiring a given role, but it should be trivial.)
using System;
using System.Linq.Expressions;
using System.Reflection;
using System.Web.Mvc;

public static class ControllerExtensions
{
    public static bool RequiresAuthorization(this Controller controller)
    {
        MemberInfo memberInfo = controller.GetType();
        var attributes = memberInfo.GetCustomAttributes(typeof(AuthorizeAttribute), true);
        return attributes != null && attributes.Length == 1;
    }

    public static bool ActionRequiresAuthorization<T>(this Controller controller, Expression<Action<T>> expression)
    {
        var member = expression.Body as MethodCallExpression;
        if (member != null)
        {
            var attributes = member.Method.GetCustomAttributes(typeof(AuthorizeAttribute), true);
            return attributes != null && attributes.Length == 1;
        }
        return false;
    }

    public static bool ActionRequiresRole<T>(this Controller controller, Expression<Action<T>> expression, string role)
    {
        var member = expression.Body as MethodCallExpression;
        if (member != null)
        {
            var attributes = member.Method.GetCustomAttributes(typeof(AuthorizeAttribute), true);
            if (attributes.Length > 0)
            {
                var authorizeAttribute = (AuthorizeAttribute)attributes[0];
                return authorizeAttribute.Roles.Contains(role);
            }
        }
        return false;
    }
}

Here is an example test for RequiresAuthorization (using NUnit):

[Test]
public void Should_require_authorized_user_for_all_actions()
{
    var controller = new ProjectController(null);
    Assert.That(controller.RequiresAuthorization());
}

And a sample for ActionRequiresRole:

[Test]
public void Should_require_admin_to_add_a_bug()
{
    var controller = new BugController(null);
    Assert.That(controller.ActionRequiresRole<BugController>(x => x.Add(), "Admin"));
}

EDIT: John suggested changing the extension methods to take lambda expressions so that they would be strongly typed. I agreed, so this has been updated accordingly.

Friday, March 19, 2010

Forcing Visual Studio to run as an Administrator

This Super User post explains how you can force Visual Studio (or any application) to run as an administrator. This is helpful for Windows Vista and Win7 users who have issues with IIS when running Visual Studio as a non-admin.

Just in case the link dies here's the answer:

To ensure Visual Studio, and any file opened with Visual Studio (i.e. double-clicking on a solution file), always opens as an admin (from Vdex):

Go to the actual devenv.exe in "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\", right-click on devenv.exe, choose Properties, then the Compatibility tab, and tick "Run this program as an administrator".

To set a particular shortcut to always open as an administrator (From Jared Harley):

  • Right-click on the shortcut (this works even in the Start Menu)

  • Select "Properties"

  • Select the "Compatibility" tab

  • Click the "Change settings for all users" button at the bottom

  • Enter the administrative username/password

  • In the new window, select "Run this program as an administrator"

  • Click OK

  • Click OK

Wednesday, March 10, 2010

Making the Switch: VMWare vs Parallels

A few years back, when I first made the switch from Windows XP to OSX 10.4 (at the time) as my default platform of choice, I knew there would be a few challenges. Mainly I knew that, being a .NET developer, there were a number of applications I really needed to run that didn't exist on the OSX side of the equation, specifically the Microsoft development stack.

At the time that Apple switched from the PowerPC platform to Intel, a competition sprang up to see who could be the first person to get a copy of Windows running on Apple hardware. Eventually a bunch of hackers solved that problem. Shortly thereafter Apple released Boot Camp, its semi-official (beta) take on running Windows on a Mac. Then Parallels threw their hat in the ring with a virtualization product that in no simple terms kicked ass. Finally, a few months later, veteran virtualization powerhouse VMWare brought out VMWare Fusion to compete with Parallels.

Today the offerings from VMWare and Parallels are very similar. They both offer Windows 7 support, a Coherence / Unity mode which breaks Windows applications out of the Windows desktop, and several other bells and whistles you would expect from a fully featured product designed to help switchers.

One of the questions I get asked frequently by people looking to switch is which one I recommend. The truth is I recommend both, but I use VMWare Fusion.

Ultimately there were two reasons, at the time, that I chose Fusion:

1) At the time (and still now I believe) it managed RAM better than Parallels.

Specifically, it didn't pre-allocate all of your allotted memory up front. When you set up a virtual machine you tell it how much system memory it can use; on my 4 GB machine I allocate 2 GB to my VM. With Parallels, when you allocate 2 gigs the virtual OS takes all 2 gigs up front and then manages it itself. Basically it's like you're running two machines and each has 2 gigs of RAM. With VMWare, if you allocate 2 gigs it will give the virtual OS as much memory as it needs at any given point in time, up to 2 GB. After that it begins swapping to disk like any normal OS would. So if you're not doing much in your VM, OSX can retain 3 gigs to use as it sees fit until your VM needs that extra gig.

This comes in handy when you're doing a simple Windows task like testing an application in IE or opening Word to work on a document. Usually Windows won't need all of the memory you give over to it, leaving iTunes and all your OSX apps free to eat up that space.

2) I work for a company that uses VMWare in our server infrastructure. We run VMWare ESX servers, which means we have a number of VMWare-formatted VMs. Fusion is able to open and run many of these VMs, which makes life easier if I want to copy over an image of a particular server setup. Really this is the reason I broke out the credit card one day and bought Fusion. But if I were starting from scratch all over again I would go with Fusion from the get-go because of number 1.

That's pretty much it; in every other way the two products are competitive, and to be honest I'm not even sure if number 1 is relevant anymore (by all means, if it's not, please leave a comment.)

It's worth noting there is a third option, Sun's VirtualBox, which I have not personally tried. I have heard good reviews, but I've also heard it lags behind the commercial products in feature set, so if you can buy one of the commercial ones you should.

Making the Switch: Intro

Three years ago I bought a MacBook Pro for my own personal use. I had envied OSX's BSD-based core and seemingly fluid UI for quite a while. At the time I spent most of my time outside of my day job working on PHP projects, and while PHP works well on Windows I always had a disconnect when I inevitably uploaded my projects to a Linux-based server.

Secondly, while the notion of toying around with hardware, drivers, operating systems, debugging issues and keeping my machine clean of viruses, spyware and other nefarious applications was something I didn't mind, and even secretly enjoyed, in high school and college, I seem to have grown past those days. Every little configuration issue or accidental bad download on Windows cost me time I didn't care to spend anymore. I'm not naive enough to think OSX is free of problems, but the relatively tech-savvy people I knew who had made the switch swore by it in terms of increased productivity.

It was for these reasons that one day I went on eBay, found a first generation MacBook Pro and took the plunge. Through the help of some friends I figured out how to adapt to the subtle differences between Windows and OSX, and now, three years later, I'm on my second MacBook Pro and even switched my wife over to the platform a little over a year ago.

So, what is the point of all of this? Why am I recounting my love for all things Apple to you, dear reader? Well, this post is meant to serve as an introduction to a series of posts I intend to write on what it's like to be a programmer, specifically a .NET programmer, who spends most of his time in OSX. I'm going to go over the tools I use (like why I chose VMWare Fusion over Parallels), some tricks I've picked up and anything else that comes to mind. First off, thanks to John Miller for the idea for this series. I had a bit of writer's block this morning and he proposed I write it, given his current situation of anxiously awaiting the release of the next round of MacBook Pros.

Monday, March 1, 2010

Intro to Agile (Part 2)

In the last post I went over the first half of my presentation on agile software development. We talked about the purpose of agile and how, at a fundamental level, it's all about the way you think and not the things you do. That post was WAY more important than this one, so if you're starting from scratch in agile I highly recommend reading it first.

Today I'm going to cover the various tools, practices and patterns people apply to try and be an effective agile team.

Agile Project Management

There are three overarching project management techniques that are applied to agile projects. They differ in their implementation but all focus on the same concept of continuous feedback and close collaboration. They are:

1) eXtreme Programming (or XP) - This was the first technique I heard of and in some ways it is the "strictest" of the agile methodologies. It focuses on short development iterations (or sprints), a series of planning meetings throughout the process and sound development practices like test-driven development and pair programming.

2) Scrum - Scrum is an incredibly popular technique among enterprise agile implementations. It is very similar to XP in day-to-day implementation, but it also covers the scenario where a product is being developed by a large team and how you would go about managing that. Specifically, Scrum speaks about breaking large teams into smaller scrums, then developing some sort of process (often called a scrum of scrums) where the smaller teams report to a higher organization to share information. Development practices are left more to the individual implementations to decide, but TDD is generally encouraged in all of the agile implementations.

3) Kanban - I perceive Kanban to be one of the newest options, but it is getting a lot of traction. It emphasizes a continuous flow of new functionality with intermittent triggered events that might bring about planning meetings or retrospectives. It is heavily focused on pulling story cards through the process, represented by a Kanban board from which the team views the project's status.

Which of these options you choose is up to you and your business. For the type of project work we do at my company, Kanban seems like the best fit, but I have worked in in-house IT departments where Scrum or XP was perfect. As with most things in agile, the key is being flexible.


Storycarding

One of the common practices that occurs in any of the methodologies listed above is storycarding. Put simply, storycarding is the act of breaking up an application's requirements into small chunks and framing the description of each requirement from the perspective of the intended end user. Put even more simply, it's turning your product into a series of small stories.

With storycarding we have a few rules:

  1. Each card should tell a story about how a user will interact with the system

  2. A card should have some business value

  3. Each story should have as little complexity as possible

  4. Stories should consist of as little detail as possible

Numbers 1 and 2 above leave us with a problem. If every story has to be about the user and they all have to have concrete BUSINESS value, how do we account for tasks that are purely IT-driven, like "Set up infrastructure" or "Upgrade to .NET 4.0"? As with most things in our profession, the answer is "it depends." Purists would say that those tasks need to be taken in line with some other business-focused task that drives them. For example, your very first story could be used as a catalyst for "set up infrastructure," because without that infrastructure the first story can never be released and thus is never complete. Some more practical teams opt to put in technical story cards and negotiate the time and prioritization of those tasks with the business. Ultimately this is up to the team.

Numbers 3 and 4 go hand in hand as well. We want our stories to have little complexity so that they are easy to test, complete and assign. We don't want stories that take weeks to finish, and we don't want a lot of overhead in order to get started on one.

The reason we favor less detail in a story is to encourage conversation and collaboration. The more we put in writing in advance of the work beginning, the more likely it is that we will need to change the details later on, most likely after the card has already been completed as written. Instead, a storycard is treated as an "invitation to have a conversation" with our business partner.
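One common way to keep a card user-centred and value-focused at once is the classic "As a..., I want..., so that..." template. Here's a tiny Ruby sketch of that idea; the field names and the example card are purely illustrative, not part of any particular tool.

```ruby
# A storycard as data: who the user is, what they want, and the
# business value ("so that...") that justifies the card's existence.
StoryCard = Struct.new(:as_a, :i_want, :so_that) do
  def to_s
    "As a #{as_a}, I want #{i_want} so that #{so_that}."
  end
end

card = StoryCard.new("registered user", "to reset my password",
                     "I can get back in when I forget it")
puts card
```

Notice how little is written down: the card names the user, the need and the value, and everything else is left for the conversation.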

Managing Work-flow

So knowing how agile teams usually generate application requirements is one thing, but what do you do with all those story cards?

This is an area where the different agile approaches differ. XP and Scrum are generally in line so we'll talk about their approach first.

XP and Scrum focus on working in short sprints (also called iterations.) In this system work is typically done in 2 week spurts of activity (although the actual amount of time varies by team.) Each sprint follows a cycle that consists of a sprint planning meeting, a period of work and collaboration with the business followed by a retrospective, then the whole process starts again.

Sprint Planning Meeting

A sprint planning meeting is a time period where the team evaluates how they are doing so far (a retrospective) and plans out storycards for the next sprint. Typically the business will assign priorities to cards, and the cards will be scheduled for the sprint in priority order. Exceptions to the priority-order rule occur when one card is dependent on another, or when the business, having seen the progress the team is making, decides to re-prioritize a card during the SPM.

The actual process associated with an SPM varies by team, but in general it begins with a retrospective on the previous sprint. It then involves someone reviewing which tasks in the previous sprint were finished and which ones need to be rescheduled, potentially into the upcoming sprint. Some teams then opt to do a live demo of the system currently in development. Finally, the business works with the team leadership to prioritize the remaining cards and schedule them into the sprint. XP uses the concept of "yesterday's weather" when determining how many stories make it into a sprint. Basically this means that however much work was accomplished in the previous sprint is the amount of work you anticipate for the next sprint. Any remaining stories are placed in a buffer to be picked up in priority order if all of the scheduled tasks are completed.
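"Yesterday's weather" is simple enough to sketch in code. In this toy Ruby version (the story names, point values and hash keys are all invented for illustration), next sprint's capacity is just the number of points finished last sprint, and cards are scheduled in priority order until that capacity runs out:

```ruby
# "Yesterday's weather": plan only as much work as actually got done
# last sprint, pulling cards in priority order until capacity runs out.
def plan_sprint(backlog, points_completed_last_sprint)
  capacity = points_completed_last_sprint
  planned  = []
  backlog.sort_by { |story| story[:priority] }.each do |story|
    break if story[:points] > capacity # stop at the first card that won't fit
    planned << story
    capacity -= story[:points]
  end
  planned # everything left over waits in the buffer
end

backlog = [
  { name: "Login",   priority: 1, points: 3 },
  { name: "Search",  priority: 2, points: 5 },
  { name: "Reports", priority: 3, points: 8 },
]
plan_sprint(backlog, 9).map { |s| s[:name] }  # => ["Login", "Search"]
```

With 9 points completed last sprint, "Login" and "Search" fit, and "Reports" stays in the buffer until the team either finishes early or plans the next sprint.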

Release Planning Meetings

Scrum and XP also have the concept of a release planning meeting, or RPM. An RPM occurs in advance of the kickoff of the project as a whole and is used to plan out how the major areas of functionality will occur throughout the course of the project. The estimations and schedules that come out of an RPM are really fuzzy, but they give the team a roadmap of sorts as they progress through the project. Some teams will create storycards in advance for the first few sprints and pre-schedule those cards in the RPM. This process works well enough so long as the result is seen as flexible. Those schedules will change as you learn more.

The Kanban Approach

Kanban takes a different approach from Scrum and XP. Instead of having pre-scheduled meetings and working in short sprints, Kanban focuses on a continuous pull of new functionality into the system. Reviews, retrospectives, demos and planning sessions still occur, but they are triggered instead of scheduled.

A storycard in Kanban begins prioritized and ready to be worked on. Team members pull cards from the ready bucket into the development bucket; as cards are finished they move to QA, staging and deployment (or any number of other buckets, depending on the team.) New cards are planned when the ready bucket gets low. What "low" means is up to the team. One team might decide that when there are only 4 stories pending they need to sit down with the business and plan out more stories. For another team that number might be 20. The same applies for demos or retrospectives. One team I saw triggered a formal retrospective when enough post-it notes were left by team members on a "retrospective" board. So as team members saw good, bad and ugly things happen they would leave notes on the board (anonymously if they chose), and when a certain number were there they would all sit down and review them.
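The pull-and-trigger idea can be modeled in a few lines. This is only a toy Ruby sketch (the bucket names, card names and the low-water mark of 2 are all invented): cards are pulled from a ready bucket, and planning is triggered, not scheduled, when that bucket runs low.

```ruby
# Toy Kanban board: work is pulled from "ready" into "development",
# and a planning session is triggered when the ready bucket drops
# below a team-chosen low-water mark.
class KanbanBoard
  def initialize(ready_cards, low_water_mark)
    @ready          = ready_cards # assumed to be in priority order
    @in_development = []
    @low_water_mark = low_water_mark
  end

  def pull_next
    card = @ready.shift # highest-priority card first
    @in_development << card if card
    card
  end

  def planning_needed?
    @ready.size < @low_water_mark # trigger, rather than schedule, the meeting
  end
end

board = KanbanBoard.new(["card A", "card B", "card C"], 2)
board.pull_next          # => "card A"
board.planning_needed?   # => false (two cards still ready)
board.pull_next
board.planning_needed?   # => true  (only one card left)
```

A real board would have more buckets (QA, staging, deployment) and work-in-progress limits on each, but the trigger mechanism is the same: the state of the board, not the calendar, decides when the team stops to plan.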

The key thing to note here is that Kanban really doesn't "stop." There is no break to plan an IPM and then restart work. Team members keep building for as long as there is work to be done.
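The triggered-not-scheduled flow can be sketched as a toy board. The bucket names and the threshold value are invented examples for illustration, not the behavior of any specific tool:

```ruby
# Toy Kanban board: cards are pulled, never scheduled. Planning is
# triggered when the ready column runs low, not by the calendar.
class KanbanBoard
  REPLENISH_THRESHOLD = 4 # team-specific: one team might pick 4, another 20

  def initialize
    # bucket names vary by team; these are just common examples
    @columns = { ready: [], development: [], qa: [], staging: [], deployed: [] }
  end

  def add(column, card)
    @columns[column] << card
  end

  # a team member pulls the next ready card into development
  def pull
    card = @columns[:ready].shift
    @columns[:development] << card if card
    card
  end

  # triggered event: time to sit down with the business and plan more cards
  def needs_planning?
    @columns[:ready].size < REPLENISH_THRESHOLD
  end
end
```

With five cards in ready, pulling two of them drops the column to three and trips the planning trigger; nothing else about the team's work stops while that planning happens.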

The Daily Standup

The final meeting that seems to be consistent across Scrum, XP and Kanban is a daily standup. A standup meeting is a short meeting held daily where all of the development team members and sometimes testing and business will get together to go over the current state of the project. In general everyone in the team should be able to answer three questions.

  1. What did I do yesterday?

  2. What am I doing today?

  3. Do I have any roadblocks?

Should a team member encounter any problems, the team should match people up to overcome those obstacles and move on. In general an ideal standup is no more than 15-20 minutes long.

Sprint 0

There's a lot of debate as to what an Agile team can do in advance of a project. The notion of pre-planning and pre-work has come to be referred to as "sprint 0." Generally the point of Agile is to adapt as time goes on and to not make decisions well in advance of when they need to be made. A period of pre-planning and decision making seems to fly in the face of the goals of an agile team, but practically there will always be a number of tasks that must occur before the project can get rolling.

The rule of thumb for Sprint 0 is to not do any work or make any commitment to a course of action that could change based on the business's requirements. If a task is aligned with the requirements of the project it should be accounted for in a storycard and prioritized into the project.

There are a number of activities that do make sense though. You can identify team members and stakeholders and set expectations for the project. In most situations you can decide on an overarching technology; for example, if you tend to be a .NET shop you might choose between WebForms and ASP.NET MVC. Finally, you might identify the very large functional areas you want to hit on for this project, e.g. "we are working on a billing and inventory management application that ties into our enterprise ERP system."

Wrap Up

So that's the majority of the content from my presentation on agile to my coworkers. The slides have a bit more information if you're interested.

In my general experience agile is a concept worth learning about. The teams I have been on have had higher quality products and more tightly knit relationships within the teams, and have ultimately delivered a result that the business was very happy with. Agile may not work in every environment, but if your organization is open to it I would recommend giving it a try.

Monday, February 8, 2010

Intro to Agile

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Last Thursday I had the opportunity to present agile development to most of my teammates at Paragon. For some of them it was the first time being formally introduced to the concepts and practices, and I'm happy to say it seemed to go really well. I promised a followup post on my site summarizing what was covered and providing a copy of my slides. This is part one of that post. I will provide a copy of the slides on both posts, but for this first post I'm only covering the first half of the presentation.

Agile is a state of mind

There's a misconception among a lot of developers that agile is the sum of the practices, tools and processes that make it up. In other words, to many people agile is doing test driven development, pairing and working in sprints. The problem with this view is that it really misses the fundamental reason why agile came to be. In fact, when the agile manifesto was written it explicitly included the phrase "Individuals and interactions over processes and tools."

Agile is about how we interact with our team and our customers. Agile is about how we embrace change. The tools we use to accomplish those goals are simply a means to an end.

There are four points that, to me, make up a successful agile team.

Delivering the right product at the right time

The right product is the one that delivers the most business value to the customer. Agile teams deliver it by allowing the business to prioritize all the work and performing that work in priority order.

The right time is when the remaining tasks no longer convey a competitive advantage to the business. We identify when this is by showing the product to real people (users, business, stakeholders) as early as possible and acting on the feedback we receive from those people.

Focusing on Communication and Collaboration

Agile teams work in close quarters. All the members of the team (project management, development, testing and the business) should be in close physical proximity if at all possible. This allows the free-flowing communication that teams thrive on. It is true that remote teams can accomplish great things in an agile manner, but collocated teams are preferred.

Agile teams work with an embedded business representative. Ideally this individual is an actual employee or stakeholder of the company the product is being built for. Unfortunately that isn't always possible. In their absence, anyone who can make an authoritative decision on the functionality of the application and who can set task priority can represent the business. Regardless, the customer needs to be involved in key planning meetings and demos.

We strive to create short feedback cycles. This means developers working closely with testers to test the code they write as it is completed. This also means working with our business customers to show them what we build as we build it and getting feedback on it. Short feedback cycles allow us to act on problems or inconsistencies and respond to shifting priorities quickly. "Throwing the code over the wall" to QA is no longer acceptable.

Finally we learn from the past. We constantly evaluate how we are doing and adapt quickly to make sure the project is successful.

Embracing Change (even late in the process)

Agile teams deliver what the business needs, not what the team thinks the business wants.

Traditionally, changing the requirements of a project in midstream is frowned upon. It's scope creep, or against the design, and it makes our jobs as developers more difficult. We introduce elaborate change management processes to punish any poor business person who dares to change our requirements.

Agile teams move past that mentality. We recognize that as we build our customers' product their priorities will change. As they see the progress we are making, some items will become higher priority than others, and sometimes entirely new areas of functionality will become apparent.

The result of embracing change is a happier customer and higher acceptance of the product. The real goal though is delivering the right product. We know it's the right product because the customer has been involved the entire time, has shifted priorities as necessary and has told us that there is no more work to be done to deliver the benefit they need.

Deferring decisions till the last possible moment

We can't predict the future but we love to try. Agile teams defer decisions until they have as much information as possible to make the right one. Very rarely is this the beginning of the project. This means we generally don't write comprehensive design documents or requirements documents well in advance of the development process because we don't know enough to accurately write those documents.

As developers this means only introducing the software design elements we need. Agile has a phrase for this: YAGNI, or "You ain't gonna need it." Generally, if we need something once we build it in the simplest way that could possibly work; if we need it again we refactor our solution to be more flexible or anticipate future changes.
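As a toy illustration of that build-simple-then-refactor cycle (the exporter here is invented for the example, not taken from any real project):

```ruby
# First need: export report rows as CSV. The simplest thing that could
# possibly work is a hard-coded comma join.
def export_csv(rows)
  rows.map { |row| row.join(",") }.join("\n")
end

# Only when a second format is actually requested do we refactor toward
# flexibility, instead of having guessed at a "pluggable" design up front.
def export(rows, separator: ",")
  rows.map { |row| row.join(separator) }.join("\n")
end
```

The point is the order of events: the configurable version exists because a second need appeared, not because we predicted one.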

To be continued

So the message of the first part of this presentation is that agile is inherently philosophical. It's how we look at a project and how we interact with our team.

In the next post I'll cover the second half of the presentation, which goes over some of the practices that are used to help run an agile project. If you want a sneak peek here's a copy of the slides.

Intro to Agile (PDF)

Monday, January 25, 2010

Tools and Techniques I Recommend to Clients

I've been doing full time consulting work for about a year and a half now, and for the most part I love it. I get to go into new organizations with different needs and concerns, meet new people and try to improve their project or process, or just in general help contribute to their business needs. Every client is different, but there are some universal tools, practices and concepts that most anyone can benefit from. When given the opportunity to express these areas of improvement, here are the things I generally talk about. It should be noted that the tools on this list are mostly geared toward .NET, as that's what most of my clients use, but some of the concepts are universal.



  1. JetBrains Resharper - This tool is first on my list because in terms of productivity gains from tooling, Resharper may be the single best thing your money can buy. Microsoft has made some serious strides in bringing refactoring and TDD support into Visual Studio, but in my opinion they remain second class citizens, especially when you compare the out of the box support with what Resharper gives you. To be honest, Resharper could sell JUST search by class name (Ctrl+N) or search by file name (Ctrl+Shift+N) and I would buy it. The upcoming version is said to provide better support for ASP markup, HTML markup and JavaScript, and I'll be buying it on release day.

  2. Balsamiq Mockups - Among the private or small-cap development community, the concept of using pen and paper or intentionally "messy" mockups has become a bit more prevalent. Unfortunately, in "corporate America" my experience has been that the business still expects a mockup to look like the final product. This results in somewhat rushed layouts becoming law, and UX suffers. Balsamiq is a great mockup tool for communicating the general flow of an application without setting expectations to the point where they are no longer malleable.

  3. JetBrains TeamCity - I'm going to talk about Continuous Integration and having a build environment in a moment, but in terms of tooling for the concept, TeamCity is excellent. It has a fairly straightforward UI and eliminates a lot of the configuration "older" tools like ThoughtWorks CruiseControl .NET required.

  4. ThoughtWorks CruiseControl .NET - OK, so I just took a shot at CC.NET by saying it requires a lot of configuration, and that's true, but it's also the most mature build system out there for .NET. In my opinion (and limited experience) Microsoft's build offering can't hold a candle to the community support and stability of CC.NET. I've seen it used in single team shops and giant enterprises. Oh, and did I mention it's free?

  5. nAnt - This item may fall off the list soon in favor of Rake if Kevin's experiences stay consistent. nAnt is essentially the scripting language you will use if you use CruiseControl or TeamCity. You could use MSBuild and whatever platform MS set up for TFS, but I find nAnt has better community support.

  6. NUnit and RhinoMocks - I'll write more about these when I talk about TDD, but in my experience they remain the best (and most stable) tools for testing on .NET. MSpec is an up-and-comer that could shift my direction.
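On the Rake mention in item 5: a Rake build script is plain Ruby rather than XML, which is much of its appeal over nAnt. Here's a minimal sketch with invented task names; in an actual Rakefile the require/include lines at the top are unnecessary, they're only here so the snippet runs standalone:

```ruby
require 'rake'
include Rake::DSL

# Each task is ordinary Ruby; prerequisites chain automatically,
# so invoking :test runs :clean, then :compile, then :test.
task :clean do
  puts "removing build artifacts"
end

task :compile => :clean do
  puts "compiling the solution"
end

task :test => :compile do
  puts "running unit tests"
end

Rake::Task[:test].invoke
```

Because the tasks are just Ruby blocks, anything you can script in Ruby can be a build step, with no XML schema to fight.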



This section is going to sound like an Agile To Do list but that's mainly because a lot of the practices that people think of as "agile" are actually just a good idea regardless of your actual development methodology. Some of these seem like no-brainers but it's amazing how many organizations still lack some of the most basic and most established concepts in software development.

  1. Source Control - My first item is my most important. If you are doing file system source control, making copies of your application and praying to whoever you pray to that you don't lose something, you are doing it wrong. There is no "but I don't like checking things out" or "but it's so much work to maintain that repository" or even "but I'm the only programmer": you're doing it wrong. You should not be writing a line of code without a repository set up and ready. You also have no excuse; there are cheap or free hosted options, or you could always set up Subversion, Mercurial, Git or even TFS and do it in house.

    • Distributed Source Control - I would recommend a system like Git or Mercurial for managing your source code over a system like TFS or Subversion (or god forbid SourceSafe.) All of those (except maybe SourceSafe) are great systems and offer a huge benefit, especially if you aren't using source control yet, but today the latest and greatest (and for a reason) are the distributed source control systems. In short, DVCS is better because it offers easier branching at less cost and with less downtime; your people can work away from the network and still have full history, commits and branching; and if anyone's repository gets messed up, everyone on the team has a copy of it. There is a much smaller chance of failure.

    • Non Locking VCS - This is a HUGE pet peeve of mine and I still don't understand why some systems default to file locking (I'm looking at you, SourceSafe and TFS.) When you lock files on checkout you ensure that no one else on the team can work on that file. That means if I need to make a change to fix a bug or add a feature, I have to either email my change to a teammate and ask them to make it, mark the file as writable and change it anyway then hope I remember not to overwrite my changes when I eventually can check it in, or bug someone to check in their code in a potentially incomplete state so I can make my change. It's a huge pain and it slows me down. Also, for every bug I need to fix I probably spend 80% of the time finding the bug and 20% fixing it. If I spend that 80%, find out I can't change the code and move on to something else in the meantime, guess what: I'm going to spend that 80% again finding the bug when the file is checked back in. HUGE productivity loss.

  2. Some sort of project management approach - Preferably a flavor of agile, but in this category even waterfall is better than the ad-hoc "hope the project is done on time" approach. It's kind of remarkable, but I still run into clients that don't really do anything with regards to project management. Essentially a business analyst or business person goes to the development team and says "I need this built by this date" and people start typing. Scope and priority change, but in a "you're halfway through this and now you need to stop and work on that" manner and not in any sort of controlled way. Code is lost, productivity falls and in general it costs the organization way more than it should.

  3. Continuous Integration - CI, put simply, is the idea that you should have some automated way to compile your code, execute metrics and tests against it and provide feedback to the team if it is still in a good, stable place. In some organizations all you have is the compile step; that's OK, set up a CI server anyway. It's worth it. People have a tendency to check in bad code when no one is watching; if the build goes red, they know someone is watching. On a related note, my next CI server will be named B1g Br0th3r.

  4. Test Driven Development - Books have been written about this topic but basically TDD is the notion that you should write a test for your code before you write your code, then write the code to make that test pass. It ensures that every line of code you write is adequately tested on a logical level (it doesn't get you out of manual system testing but it should cut the time and resources needed for that if done properly.)

  5. Automated UI Testing - This one is a bit controversial as of late, but I'm still recommending automated UI testing if a client has a technical QA team member who can maintain the tests. These types of tests essentially drive the UI of your application to attempt to determine if it is executing as expected. They ARE fragile (software changes will drive changes in these tests), but if done by your testing team they provide a good feedback loop between development and QA. They also hit a layer that TDD usually misses. I would NOT recommend paying for tools; if you're doing web dev, Selenium and WatiN are great and free. Also, for a group with no TDD experience, sometimes this type of testing can be an easier addition to an existing codebase, but please don't get comfortable; learn TDD.
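The test-first cycle from item 4 is easier to see in code than in prose. Here's a minimal sketch using Ruby's bundled Minitest; the Cart class is invented for the example, and in real TDD the test is written and run (and fails) before the production code below it exists:

```ruby
require 'minitest/autorun'

# Step 1: write the test first. With no Cart class yet, this fails (red).
class CartTest < Minitest::Test
  def test_total_sums_price_times_quantity
    cart = Cart.new
    cart.add(price: 300, quantity: 2)
    cart.add(price: 150, quantity: 1)
    assert_equal 750, cart.total
  end
end

# Step 2: write the simplest production code that makes the test pass (green).
class Cart
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def total
    @items.sum { |item| item[:price] * item[:quantity] }
  end
end
```

The same rhythm applies with NUnit on .NET; only the syntax changes. Red, green, then refactor with the test as a safety net.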


The Rest

Some quick props and a disclaimer.

Redgate SQL Tools - Best SQL Server toolkit for DB types.
Beyond Compare - A great file comparison tool for Windows.
SnagIt - A great screenshot application for Windows.
AgileZen - An excellent online Kanban board built by a Northeast Ohio local. A lot of people love pen and paper for Kanban, but if you're working with a distributed team AgileZen is the best I have seen.
Basecamp - Another great project management tool from 37signals, the company where Ruby on Rails was created.

Disclaimer: Redgate, JetBrains, Scooter Software, TechSmith and Balsamiq all provided swag for the Technic picnic I held last summer. That said, their products rock and that's why I contacted them asking for a handout in the first place.

Monday, January 11, 2010

Live to Code

For Christmas my wife bought me a copy of Michael Symon's new cookbook Live to Cook: Recipes and Techniques to Rock Your Kitchen. For anyone who isn't a Clevelander or a Food Network aficionado, Symon is the head chef and owner of several successful restaurants in Cleveland and Detroit and is also an Iron Chef on Iron Chef America.

Now, I don't consider myself a chef and I don't regularly read cookbooks. In fact, the gift may have been a not-too-subtle hint that I need to bone up on my culinary arts. That said, I have taken to Cleveland and enjoy reading a bit about local people doing what they love and the various landmarks the city offers, so I opened Symon's book and found something slightly unique. Hidden between the various recipes and techniques are short descriptions of his inspiration as a chef: what drives him to do what he does and how he just loves to cook. It got me thinking about programming and some conversations I've had with friends in our local community.

In the book Symon speaks of passion, and how in his career thus far the most successful people he has worked with haven't been the best trained or the people with the most experience on the line, but rather the people who were most passionate about what they do. It's an observation that carries through to programming and, I imagine, any other career where people take pride in the craftsmanship of what they do. In my (admittedly short so far) career I've had the opportunity to work with some incredibly talented and bright people, and they all had one characteristic in common: they were passionate. You could get them talking for hours ad nauseam about software and what makes clean code, how to best structure an algorithm or what new and exciting technique they were learning now.

In the book Symon tells a story of his first restaurant after school. He recalls how it was a small 30-40 seat place where he worked under a head chef who was self-educated. Despite the individual's lack of "formal" training, Symon speaks to how much he learned from the man and how he gravitated to him as a mentor because of the passion he showed. His examples are things like an insistence on using fresh garlic and not pre-peeled garlic despite the hours of prep time required, and manually roasting peppers instead of purchasing them pre-roasted. In other words, the details are what mattered to this individual, and Symon maintains that he adheres to those same concepts in his restaurants today. This to me is analogous to the Test Driven and Behavior Driven Development movements. In our profession we spend time identifying the details of an application in advance, planning it out in essence by writing tests against how the code should function. By doing so we ensure a cleaner, more quality-driven end result.

From a business perspective the analogy continues. In software you have your McDonald's and your Lolas, and there really is nothing wrong with being either one. The key is in knowing who you are. If you want to be the Lola of the software world, you're going to be focused on quality, delivering the best product your customer asks for, and your work will demand a premium because of it. If you're McDonald's, then you're going to deliver quickly and affordably, but it may not be as good as the premium competition.

That said, I'd much rather frequent Lola than McDonald's.

Wednesday, January 6, 2010

Sometimes ASP.NET Baffles Me

Fair warning, this post is going to be a bit of a rant about ASP.Net and how it does nothing like every other web language on the planet.

So far today I've spent about six hours trying to come up with a solution to one of those problems that seems like it should be oh so simple to solve. As of this point, I got nothin'.

So the client I'm at these past couple months uses a third party set of ASP.Net user controls to handle a lot of AJAX tricks and visual prettiness. The only problem is, from what I can tell, they don't work. They don't render correctly in Firefox, occasionally don't render correctly in IE and cause some serious issues when paired with jQuery. The problem I was tasked to solve was a scenario where the third party control set's popup box (essentially a floating DIV) could cause the rest of the content on the page to fall down and be essentially invisible. To make matters worse, the popup itself showed up below the fold, the scroll bar was disabled and it was modal, so there was no way to get back to a usable state short of refreshing the browser. After a couple hours playing with the control I decided it would be easier to pull the control out completely and go to the quite excellent jQuery UI dialog plugin. Once I applied it to the mischievous popup the UI worked and all was well... or so I thought.

A day passed and our tech lead came over. He asked if I could convert the rest of the popups in the application to jQuery for consistency, and so that the issue wouldn't reoccur in the future, so I started doing that. Things were going well until I ran into another of this toolkit's controls, CallbackPanel. For all intents and purposes CallbackPanel is like an update panel except it is invoked directly in JavaScript. I started applying the jQuery work to a popup containing one of these panels and noticed an odd trend. It behaved correctly when executed the first time, but when run again without a page refresh the JavaScript would execute but the server side code was never called. As of yet I have been unable to resolve this and have a pending support ticket with the vendor. My theory is the event is somehow tied to where in the DOM the panel originally resides. When I put that panel in a jQuery dialog it is moved in the DOM, and that breaks the event.

Since I have little faith in the vendor's speed to fix this (or tell me it's intended functionality) I opted to move ahead and try to migrate the whole section into a simple AJAX callback. I would make a call to a WebMethod and return some markup from the user control that would then be injected into the popup and voila... Ajax-y goodness. This is how I would approach this on just about any other platform. Alas, .NET is not any other platform.

The first issue I ran into is that WebMethods have to be static. That means I can't access the user control already on the page.

OK, no problem: I'll instantiate one, get the rendered HTML and pass that back to the client.

I try instantiating the class: no go. It turns out you need to use LoadControl (a non-static method, of course) to create an instance of a user control, otherwise none of the child controls get instantiated.

Crap. OK, Mr. John Miller finds this article for me. I create a page object, load the control, use reflection to set various properties and even isolate all this into a class called UserControlRenderer for reuse. Things are looking good until I run it.

Error executing child request for handler 'System.Web.UI.Page'

Son of a bitch, are you kidding me? I do a bit of digging and find out that you can't render a DataGrid (or GridView) unless it is contained in a form with runat=server set. That means this approach of rendering just the user control to a string and returning it won't work for my implementation, because I'm using one of the most common controls in .Net.

All told, I have about 45 lines of code that do nothing but try to render the control (setup, instantiation, the class that does the rendering, etc.) and I have to tap into reflection to do it. Oh, and it doesn't even work.

For those of you that haven't played with any languages outside of .Net, here's how much code I would need in, say, Ruby to render a single "user control" (they are called partials in Rails):

render :partial=>"mycontrol", :layout=>false

One line... one line of code. And it works. Of course I have to get the data and set it to a variable accessible to the control, then render out the HTML... but I have to do all that in .Net too, and I didn't count those lines in my 45 above.
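And it isn't Rails magic: a partial is just a template rendered against some local data. Here's a rough sketch of that underlying idea using Ruby's standard library ERB (the template and the render_partial helper are invented for illustration; this is not the Rails implementation):

```ruby
require 'erb'

# A "partial" is just a template evaluated against some local data.
# Rails wraps this pattern up so the whole thing is one method call.
def render_partial(template, locals)
  ERB.new(template).result_with_hash(locals)
end

# a stand-in for the "mycontrol" partial from the Rails example above
mycontrol = "<ul><% items.each do |item| %><li><%= item %></li><% end %></ul>"
html = render_partial(mycontrol, items: ["widgets", "gadgets"])
```

No static restrictions, no reflection, no required server-side form wrapper: templating is decoupled from the page lifecycle, which is exactly what ASP.Net's control rendering makes so hard.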

From day one the ASP.Net team assumed that their target audience was a group of people who didn't understand the web; who didn't get its generally stateless nature or how HTML works. That probably made a ton of sense in 2001 when .Net was young, people were used to writing VB Windows apps and there was a push to migrate those applications to the web. In 2010 it makes no sense. If you are a web developer and don't understand how the web works, I dare say you are a quite incompetent web developer. Thank god for the push to ASP.Net MVC; here's hoping it takes over in the enterprise soon.

Ok rant done. Next post will be a bit more productive.