by Jackie Cohen, Local Orbit Software Engineer
As a software engineer at Local Orbit, of course I write code – but that’s not all. I take on larger-scale product development tasks with the team as well as smaller ones, including data management for team and customer reports, and writing documentation for our databases and our software. I work with the rest of the development team to engineer the software that helps us move forward as a company.
When there’s something new I want to build, I consider: the why, the where, how fast, how well, and what. The following takes you through my process as an engineer at Local Orbit, developing new features in our application.
When I begin work on a new feature, the first thing I consider is why we are building it. To start with, we use user stories in an issue tracker to understand why. For example, a story might be “A market manager should be able to edit the price for a single item in an order if the order has not yet been paid for,” with comments that explain the details of why and how customers may want to do that. We’ll have succeeded with the feature when that is a true story.
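To make that story concrete, here is a minimal plain-Ruby sketch of the rule it describes. The class and method names (`Order`, `OrderItem`, `update_price`) are hypothetical illustrations, not Local Orbit’s actual code:

```ruby
# Sketch of the user story's rule: an item's price can be edited
# only while its parent order is unpaid. Names are illustrative.
class Order
  def initialize(paid: false)
    @paid = paid
  end

  def paid?
    @paid
  end
end

class OrderItem
  attr_reader :order, :price

  def initialize(order:, price:)
    @order = order
    @price = price
  end

  # The market manager may change the price only if the
  # order has not yet been paid for.
  def update_price(new_price)
    return false if order.paid?

    @price = new_price
    true
  end
end
```

Putting the guard on the item itself means every caller – a controller action, a background job – enforces the same rule, which matters once several teams’ “whys” depend on it.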
Often, a single new feature is meant to satisfy various behaviors desired by customers and/or by the rest of our product, support, and sales teams. To avoid breaking existing functionality for any of those people while adding something new, I have to understand all of these whys.
Sometimes new features require adding a block of new functionality somewhere in our existing Ruby on Rails application, but sometimes they require changing an existing part of the app, or altering how data flows through the system. I need to figure out which part of the application I’ll be editing.
Our team’s process makes room for conversations that lead to further discovery during development. These discussions build bridges between features and ideas, and they happen in our sprint planning meetings, in our overall team meetings, and in pair programming sessions between developers as needed.
Having gained this “where it goes” understanding, I practice test-driven development: I add a series of initial tests to our automated test suite that, at this point, should fail. Maintaining automated tests throughout the development process makes our future work easier, because the tests record what we intend to happen if we change this feature later on. It also ensures that our customers will experience as little surprising behavior as possible.
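As an illustration of that test-first step, here is a sketch using Minitest from Ruby’s standard library (standing in for whatever framework the real suite uses; the `OrderItem` model and its names are hypothetical). In true TDD the test class is written first and fails “red” until the smallest implementation that makes it pass is added:

```ruby
require "minitest/autorun"

# Hypothetical model under test. In the red phase this class would
# not exist yet, and both tests below would fail.
class OrderItem
  attr_reader :price

  def initialize(price:, order_paid:)
    @price = price
    @order_paid = order_paid
  end

  def update_price(new_price)
    return false if @order_paid

    @price = new_price
    true
  end
end

class OrderItemPriceTest < Minitest::Test
  def test_price_can_change_while_order_is_unpaid
    item = OrderItem.new(price: 10, order_paid: false)
    assert item.update_price(12)
    assert_equal 12, item.price
  end

  def test_price_is_locked_once_order_is_paid
    item = OrderItem.new(price: 10, order_paid: true)
    refute item.update_price(8)
    assert_equal 10, item.price
  end
end
```

Once these pass, they stay in the suite, so a future change that accidentally lets a paid order’s price be edited fails loudly instead of surprising a customer.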
I also need to determine my algorithm for solving the problem at hand, figure out how efficient my code will be in production, and try to improve the overall speed users will experience when they use the software online. I’m also concerned with how easy the code will be to alter or add to in the future.
This is where the first question, why am I working on this, is particularly important. Understanding the different ways that users may want to use the new functionality I’m building helps me make decisions about tradeoffs. There’s a delicate balance between writing extremely modular code and getting a solidly built feature to production for customers to use as soon as possible, and both are important to us. Answering the “how fast?” question is really asking how we can succeed on both counts.
Once code has been written, and it all seems to work, my attention turns to the specifics – how well has the feature been implemented? This involves adding to our extensive automated test suite to fill in gaps my initial tests may not have covered. I’ll continue to discuss the feature with the team and receive their feedback and input on my approach.
At this point in the feature cycle I want to ensure that I completely understand the feature’s capabilities and limitations with respect to the rest of our software. Can other engineers understand what I’ve written? Is it clear what manual testing and quality assurance (QA) processes are needed before the feature is released? Once the team has finished its review, the code is merged to the master branch and moves on to our testing environment.
Finally, the code gets an individual review from another developer. Once we OK everything together, my new feature is tested by other team members in our staging environment (an internal site where features are tested right before customers see them). The last step of my process is a set of mini-meeting conversations in which I confirm again that everything makes sense. Can I document what I’ve done clearly? Do another developer and the Product Manager understand what I’ve done? Does the feature fulfill the user story that we started out with? If the answers to all those questions are “yes,” we’ll run the automated test suite one last time. On “pass,” we’ll open the new functionality to users, and I’ll begin the feature cycle again.