06 1 / 2014

Tell a Different Story turned 1 today!


26 11 / 2013

Frustrating day yesterday in the world of ClojureScript.  So frustrating, in fact, that I’m writing a blog post about it to spare the next poor soul who struggles with the same issues Nhu and I did.

I find my love/hate relationship with Clojure(Script) interesting. The code is beautiful: well-named functions, everything succinct. But the errors you get when things go wrong are just…sooooooo vague. It makes for a frustrating experience, to say the least.

Now let me back up a bit and take you through our journey yesterday…

Overview

Nhu and I are working on a client project that the team wishes was all Clojure and ClojureScript. The Clojure-side has been exciting. Like I mentioned above, the codebase is gorgeous, succinct and easy to read. It is so nice working on that side of things!

On the other hand, Nhu and I have been setting up the ClojureScript side of things and on that side, the “dark side” if you will, things haven’t been so smooth (my guess is lack of experience plays a huge role here). And the term “haven’t been” in the sentence above = 2.5 days with little progress.

To say we are frustrated is an understatement.

Friday, we spent a good two hours just getting a basic ClojureScript test to work. The test read like this:

(it (should= 1 1))

Thanks to Micah and a lot of configuration changes, we left Friday with the test passing (guess our logic was right after all /s), versioning ironed out, and we were finally ready to write some real code Monday morning.

Monday, 9:30am

We started off the morning by creating a new branch and cleanly copying in our newfound configurations.

Nice!

Problem was, nothing worked. The error we were getting was this:

cannot find variable specljs

If we rolled back to the old project, the one (and only) test ran, but in the new project, no love.

We copied *almost* every line over. We even downloaded a Sublime “file diff” package to double-check that our code was exactly the same as in the Friday project. We must have tried everything under the sun to get that code working.

In the end, Myles figured it out, but not without a lot of looking and poking (which, thankfully, helped soothe my bruised ego). The problem? We had created both our test file and our source file but the source file didn’t have a namespace.

Guess that’s TDD for ya, eh?! (Don’t create it until you need it.)

Instead, the “src” file had been blank all along and this is a no-no in Clojure.
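For reference, a minimal sketch of what the source file needed (the path and namespace name here are just examples, not the client’s actual ones):

;; src/cljs/client_name/auto_complete.cljs  (name is just an example)
;; The file can't be blank; it needs at least a namespace declaration
;; matching its location on disk.
(ns client-name.auto-complete)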

Monday, 11:30

Now we pushed the finalized configurations and started working on real code. But…really…this time it’s gonna happen. This time we’re writing real code.

First up…

We’re using Domina and Hiccups (although we’d really like to use just raw HTML), so there was a lot of trial and error getting all of that configured in the project.clj, source files and spec files.

It was slow, but we eventually got all of the dependencies in order.

Ah, the smell of progress is sweet!

Next up, our first test. More fiddling.

How do you write Hiccups again? Does the string go inside the bracket or outside?

One passing test and one *almost* passing test and then we get stuck on this piece of code:

(dom/set-html! (css/sel "body") (h/html (html/text-field {:id "testfield"} "testfield")))

The error we get is this:

Can’t find variable: hiccups

Hmmmm…my first thought is that there’s a dependency issue.

The “require” looks like this:

(:require-macros [hiccups.core :as h])

I’ve written Hiccups code before. It isn’t that hard. Oh, but, this time, it is.

We have a well-written internal company application that we are using as a model to help us along the auto-complete ClojureScript path. We got our Hiccups code from there. It was copied and pasted, and I didn’t fully read the line (this is foreshadowing).

Sidenote:  When am I going to learn not to copy and paste code? Didn’t I just write a blog post about this?!

Still not realizing what the error is, we get Colin Jones on TeamViewer/Google Hangout to help.  He immediately sees it. “What’s html/text-field?”

Hmmm. What is html/text-field? I don’t know because I copied and pasted it from Colin’s code!! {sad face}

3:30pm

We proceed to the next test and write this code to make the test pass:

(defn populate-dropdown [items]
  (doseq [text items]
    (let [item (dom/single-node (h/html [:li text]))]
      (dom/append! (dom/by-id "auto-complete") item))))

The error we get is something quite familiar to us by now. We’re back to the stinkin’ Hiccups error:

Can’t find variable: hiccups

I immediately think “dependencies” again…but then we realize the previous test also uses Hiccups, and it passes. So why are we getting a Hiccups error now? I try a few more things, then start banging my head on the desk.

We try *everything* (although obviously not) and waste 1.5 hours on this “experience”. It completely wears us out and crushes our spirits. {sad face}

In the end, Ben Trevor saved the day by explaining that you also have to require “hiccups.runtime”, like so:
(:require [hiccups.runtime])

Never would have known that.
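For the record, here’s roughly what the top of the namespace ended up looking like (the namespace name is a placeholder, and the Domina requires are just the ones the code above uses):

(ns client-name.auto-complete
  ;; hiccups.core holds the macros; hiccups.runtime has to be required too
  (:require-macros [hiccups.core :as h])
  (:require [hiccups.runtime]
            [domina :as dom]
            [domina.css :as css]))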

Reflection

Coming home after work tonight, I was explaining to my husband (also an apprentice) what a tough day I had, moving from one problem to the next.

Three problems….that’s what we got done today…working through three problems and about 25 lines of code.

He listened intently to the whole story and then asked this question: “Why didn’t you ask for help sooner?” I paused…thought about it and said, “I don’t know.” But after spelling things out in this blog post, I think I do know.

Every time we encountered a problem today, it took someone else’s expertise to solve the issue. And, even though I know I would have likely spent days on those issues were it not for that particular person’s expertise, I felt bad asking for help. Why??? What conjures this guilty feeling and how can I shut it off?!

Reflecting on this I realize, underneath it all, it’s probably an ego thing. I wrote a post about this a couple of months ago…putting your ego in your pocket…but, obviously, it’s not that easy.

Asking for help pushes me out of my comfort zone.

Hopefully the resistance to do this lessens over time.

Note to self: make this the desktop wallpaper on your Mac.

25 11 / 2013

Although I would love for this to be a post about decorating a room with patterns, it’s not. So, if by some dumb “luck”, you wound up here looking for that type of content, sorry!

I’ve learned quite a bit about the decorator pattern in the past few weeks, so here’s a blog post dedicated to just that subject.

What books will tell you about the decorator pattern is that it’s a “wrapper” pattern. It’s a class (always a class? I presume so) that wraps functionality (i.e., “behavior”) around an object of another class.

Now, when will you use it? Most textbook examples show you cases where you have to add a lot of attributes to get a final product. In the Head First book on Design Patterns, the example they used was building a cup of coffee (from “Starbuzz”). The coffee is its own class and then each available ingredient in the coffee is another class - whipped topping, skim milk, mocha…

So, to build a cup of coffee, you chain ingredient classes together (plus the coffee class) to form an order…
One customer orders a coffee with mocha. Chain together mocha and the coffee classes.
One customer orders a coffee with skim milk and mocha. Chain together the skim milk, mocha and coffee classes.

Part of the reason you’re chaining these objects together in the coffee example is to accumulate ingredients.

Conceptually, that may be hard to grasp. So, specifically, let’s discuss something you’d want to accumulate.

Price is one example. Weight is another (say you’re building a machine with multiple parts that will have to be shipped and you need to know the final weight). Another example is a product description…say, for instance, you are building a computer with customized accessories and you want a variable to hold the final description of the computer - “Dell computer with 4gb of memory and a 22-inch monitor” versus “Dell computer with 8gb of memory and a 27-inch monitor”.

When I first read the text on the pattern, I was slightly confused by the code. It went something like this:
SkimMilk.new(Mocha.new(Coffee.new())).price

Notice that with each initialization of a class, we are passing in another object. This means that each decorator class takes the object it wraps (ultimately the coffee) as a parameter.

So, back to accumulating totals…to accumulate a total price for a coffee, we give each of the classes a “price” method. The coffee’s price method, in this example, returns a fixed price - let’s say $1.00. Then each “attribute” takes the running total (the price of whatever it wraps) and adds its own price to it, something like 1.00 + 0.20.

Since I found the code a bit hard to follow, it was interesting to write my own example and see it working. It’s pretty cool how it works actually. It’s something I would have never really thought of building without studying design patterns, but it was pretty enlightening after I tried it for myself. You can see my example here.
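To give the flavor of it, here’s a stripped-down sketch (the prices are made up) of how the chaining and the accumulating price method fit together:

class Coffee
  def price
    1.00
  end
end

class Mocha
  def initialize(drink)
    @drink = drink          # the object being decorated
  end

  def price
    @drink.price + 0.20     # wrapped price plus this ingredient's price
  end
end

class SkimMilk
  def initialize(drink)
    @drink = drink
  end

  def price
    @drink.price + 0.10
  end
end

SkimMilk.new(Mocha.new(Coffee.new)).price   # 1.00 + 0.20 + 0.10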

In the past, I’ve heard my husband mention “decorators” a few times, so, once I got the logic under my belt, we talked about where he had seen decorators being used. His example was wrapping an existing class to add additional functionality. More specifically, since his company uses Spree as a codebase, they will wrap a class (for example, the “Checkout” class) with a separate class to force the existing class to perform additional functionality that it wasn’t previously doing.

Now, the non-apprentice Kelly probably would have been confused by this concept. Why not just change the existing checkout class? It’s open source. You have access to it.

What I’ve learned since I’ve become an apprentice is that it’s just not in your best interest to do so. It’s often called “monkey patching” and most craftsmen consider it to be ugly. A good example of why this is ugly would be that new members of the team might not realize the new functionality exists. They may have had previous experience with the original code in “Checkout” and now that they are on your team, they have to go dig through the class trying to understand what functionality you’ve added. If, instead, you wrapped the additional functionality in a decorator class, the intent is much more clear. The new team member only has to read the class with the additional functionality to get up to speed.

Another place where decorators prove useful when working with a pre-existing codebase is upgrading that codebase. With Spree, the codebase is maintained by an outside group of developers. So when those developers upgrade the codebase and you’ve monkey-patched a previous version, upgrading is going to be extremely painful, and those upgrades could be critical to the security of your system (ask me how I know). In order to upgrade, you’d likely have to go through all of the new changes (and there could be LOTS) to figure out how to integrate your now-custom codebase with the new code. If, instead, you’ve used the decorator pattern, upgrading is still going to be painful, but having your custom code in individual classes lets you handle it much more easily.

Studying the decorator pattern allowed me to reflect on a conversation I had from my previous career when we were contemplating switching platforms. We had contemplated switching shopping cart platforms several times throughout the course of our business.  At the time, the platform that often came up in discussion was Magento. I would often ask why developers talked so highly of it and one answer was common…Magento had “layers”: one layer for the core Magento codebase and one layer for each merchant’s custom functionality. It was explained that this was the way to go since it created a clean codebase that could be continually upgraded with much less pain. I can’t help but wonder now if these “layers” really were just a glorified use of the decorator pattern?

One last thing I’ve learned about the decorator pattern came from a conversation I had with Doug just a few days ago. He had mentioned the same thought once before, but I just didn’t get it: “Many developers will wrap an existing class with new functions and mislabel it as ‘a decorator’.”  But they are adding new functionality. Why is it not the decorator pattern?

This past Friday, we had more time to go over an example and he was able to clear up the confusion. If you are, in fact, changing the behavior of a class, you are definitely using the decorator pattern. But if you are simply adding new functionality, that may not always be the case.

The difference here is whether the new methods are public or private. If the new methods are private, they are being integrated into the existing public methods and, thus, the behavior of that public interface is changing. If you are simply adding public methods to a class, you are, in essence, just expanding the interface, not changing existing behavior.
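A rough sketch of the distinction, using a made-up Checkout class (not Spree’s actual code):

class Checkout
  def total
    100
  end
end

# A decorator: the existing public method's behavior changes.
class DiscountedCheckout
  def initialize(checkout)
    @checkout = checkout
  end

  def total
    @checkout.total * 0.9   # same interface, new behavior
  end
end

# Not really a decorator: the existing behavior is untouched;
# we're just bolting a new public method onto the interface.
class CheckoutWithReceipt
  def initialize(checkout)
    @checkout = checkout
  end

  def total
    @checkout.total
  end

  def print_receipt
    puts "Total: #{@checkout.total}"
  end
end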

24 11 / 2013

I got to pair this week with Nhu Nguyen on client work. The task…add ClojureScript to a client Clojure project.

It ended up being a super-interesting experience that spanned a couple of days - only because we thought we had things working until we went to run a test a day later.

When I worked on the Dojo-Dashboard project, my team had already wired ClojureScript into the project before I came on board. Looking back now, there were so many holes in my thinking because I didn’t have this configuration knowledge.

Often times, I’ve now come to realize, when things are working, I won’t take the time to go back and read said “working things”. Had I done so, though, things would have made a lot more sense.

So, first things first…in our main “src/” and “spec/” folders we needed to add “clj/” and “cljs/” folders. This is how we did it with Dojo-Dashboard and things seemed to work well then, so we just went with the same idea. We moved all of the existing code into “clj/” and created a small test method in the “cljs/” file to see if we could get it to compile and run.

The original task was to write an auto-complete function that would try to guess what you were typing as you input each letter (similar to how Google works). So we created an auto_complete.cljs file and added this code:

(ns client-name.auto-complete)

(.write js/document "Hello, ClojureScript!")

From there, we added a script tag to one of the existing pages that would execute the compiled code and hopefully display “Hello, ClojureScript!” on the page.

But first, we had to get the code to compile.  And that, folks, is where the real learning began.

One thing I love about ClojureScript, compared to Rails, is that all of the configurations are in one place (so far, anyways). And that file, project.clj, lives in the root project directory.  Everything feels so tiny and succinct compared to Rails, and that has really started to make me fall in love with Clojure.

On top of this, everything in the project.clj file is just one big hash of configurations and that makes it so easy to read.  Here’s what the original project.clj file looked like before ClojureScript configurations were added:

And here’s what the file looked like after:

Note:  I re-created this before & after based on memory, so please don’t rely on these two files 100% to set up your ClojureScript.
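To give a rough idea of the shape of the additions, here is a sketch only - the project name, versions, and paths are placeholders rather than the client’s actual configuration:

(defproject client-name "0.1.0"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/clojurescript "0.0-2030"]
                 [domina "1.0.2"]
                 [hiccups "0.2.0"]]
  :profiles {:dev {:dependencies [[speclj "2.9.1"]
                                  [specljs "2.9.1"]]}}
  :plugins [[speclj "2.9.1"]
            [lein-cljsbuild "1.0.0"]]
  :source-paths ["src/clj"]
  :test-paths ["spec/clj"]
  ;; one alias to run the Clojure specs and the ClojureScript build/tests
  :aliases {"spec-all" ["do" "spec," "cljsbuild" "test"]}
  :cljsbuild {:builds {:dev {:source-paths ["src/cljs" "spec/cljs"]
                             :compiler {:output-to "resources/compiled-assets/client_name_dev.js"}}
                       :prod {:source-paths ["src/cljs"]
                              :compiler {:output-to "resources/compiled-assets/client_name.js"
                                         :optimizations :advanced}}}})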

Things I learned from setting ClojureScript up are:

  1. It’s pretty darned easy. That was amazing. I thought it was going to be so much harder than it really was.
  2. You need to add ClojureScript (“cljs”) to the :dependencies, and “specljs” to the :dev profile’s :dependencies, for testing the cljs.
  3. Add an alias to run all tests for both Clojure and ClojureScript. (Thanks to Taka for figuring this out on the Dojo-Dashboard project.)
  4. Now we have two directories of source code and two directories of test code, so we need to specify both directories with “source-paths” and “test-paths”.
  5. All the cljs configurations go in their own key (see my code at the bottom of the second Gist).
  6. We set the production build’s compiler :optimizations to :advanced so that all of the compiled code ends up in a single file.
  7. Where the compiled cljs ends up is all right here…it’s written to the :output-to path.

That put so many different puzzle pieces together for me; it was pretty awesome. There were definitely a lot of ooooo’s and I-see-now’s going on.

And, although we got our “Hello, ClojureScript!” code to display correctly, we still had trouble with the tests when we went to write our first one the next day. The testing and compiling kept complaining about a “bin” file, and running the tests would just turn into a hung process.

Micah ended up helping us get the tests working and here’s what I learned from that experience:

I definitely didn’t understand every line I copied into the project.clj file (not good). When I got to the :cljsbuild settings, my eyes kinda glazed over on some of it. We ended up copying and pasting code we didn’t need. This line specifically:

:cljsbuild ~(let [run-specs ["phantomjs" "bin/specljs_runner.js" "resources/public/javascript/client_name_dev.js"]]

The code now reads like this:

:cljsbuild ~(let [run-specs ["bin/specljs" "resources/compiled-assets/client_name_dev.js"]]

For the Dojo-Dashboard project we had used PhantomJS, but we weren’t using it for this project. Also, there was no “bin/specljs_runner.js” file in this project. (Strike 2.)  Those are things I definitely should have looked at more carefully.

Another thing Micah did was upgrade all of the versions of speclj, specljs, ClojureScript, etc. Versioning is something my eyes often hurry past when reading gemspec files and the like, so that definitely made me more aware of the fact…check versioning.

After working with Micah, I realized there are still a couple of configurations I don’t understand…namely the line with “bin/specljs_runner.js” in it. In reading the “bin” file, I still can’t really make heads or tails of it. This is what it looks like now:

Hopefully, I can ask Colin or Micah about it on Friday and understand more of what it’s doing.  I think it has something to do with phantomjs.  Maybe it’s time to dive into phantomjs and see what that’s really all about.

24 11 / 2013

This past Friday I got the opportunity to pair with Doug Bradbury during Waza. We started off down the path of playing with the Raspberry Pi project he had been working on for the past few months, but we were quickly side-tracked by a completely different task.

For one of Doug’s side projects, he maintains a Spree site for a few of his family members and the site had a production issue. So we jumped in to see what was happening.

The side story to this project is that, just a few days ago, Doug had upgraded the site from Spree 1.3 to 2.0 (2.1?) and, while in the process of upgrading, he went ahead and upgraded Ruby from 1.9.3 to 2.0. This, he later admitted, was a decision he regretted. Upgrading both libraries at once was risky. Although we didn’t talk about the why, my thought was this: if something goes wrong, how do you know which upgrade is at fault? It seems smarter to upgrade one library, give it some time, and then upgrade the next. On top of this, I don’t think there was much for Doug to gain by upgrading to Ruby 2.0. So it was better to be patient.

So what is Spree? Spree is a free Rails shopping cart that e-commerce business owners can use to build their websites (mostly small businesses, I would guess). Shopify also built a business selling out-of-the-box eCommerce shopping carts using Spree’s platform, although, today, I would think Shopify’s own platform has diverged quite a bit from Spree’s.

Spree looks to be an awesome tool right out of the gate from what I can tell. There are tests (nice!), the code seems well-named and decoupled at first glance. I have some experience with eCommerce platforms as I built my (now-sold) business on a PHP open-source shopping cart platform called osCommerce. Sadly, at the time I made that unknowingly-HUGE decision to use osCommerce, we weren’t aware of other platforms out there. Had I known about Spree at the time….*sigh* I can’t help but wonder what my life would look like now.

Oh well, another topic for a different day.

On another note, I was interested to work with Spree, because my husband Mike, also an apprentice, works for a company that built their site using Spree as well.

First things first, the issue at hand was an XML-RPC error. We received the error message from Heroku when a customer wasn’t able to create an account. Right away, Doug knew it had something to do with the MailChimp integration.

So here’s how it goes…when a customer creates an account on the site, they are automatically added to MailChimp through a Ruby gem called “Spree-Mail-Chimp”, which uses another gem called “Hominid”. The bug was in code inside of Hominid, where XML-RPC was being used to retrieve and send data to MailChimp (vs. API calls). After some searching, we quickly realized this was a Ruby 2.0 issue and the bug hadn’t been resolved yet, but Doug’s larger question was “Why? Why is Hominid using XML-RPC? Why not just use the API?”

So…more searching…Are there any gems that use MailChimp’s API? One, possibly. But lots more that didn’t use the API. Next, we Googled for the MailChimp API and discovered it was pretty new functionality, which explained why the other gems used XML-RPC. So, on to our next task: ditching Hominid and updating the Spree-Mail-Chimp gem we were using to work with the API.

What sounded like a daunting task ended up being pretty easy. And then, boom, the site was back in business. Here’s our commit to the gem that now uses the MailChimp API.

It was interesting to go through the process with Doug. We used the API documentation quite a bit and, sometimes, I can’t help but get overwhelmed by all the documentation. It can be confusing and unclear at times and I find myself getting frustrated. So it was nice to see Doug work to decipher it as well…it made me realize that sometimes it really is just a struggle and you have to read it in chunks, write some code, and read some more. Rinse and repeat.

In the end, it was pretty satisfying to do the work. MailChimp is pretty popular amongst the same group of people that use Spree, and in our research it seemed business owners had been complaining quite a bit about the XML-RPC error. So it was nice that we could spend just an hour or two writing code that could help so many people - people who maybe can’t afford the help or just have no clue how to fix the problem.

Lastly, in testing the customer account creation, we found a separate issue with a missing entry to Spree’s YML file. This was a bit surprising, since it was Spree’s own code, but Doug said it was an easy fix. Surprisingly, we ended up making the fix twice. Once in Spree’s core code, which we then submitted a pull request for (so fun!), but also in the aforementioned site’s custom codebase so that the fix would be there for the time being while the pull request was pending. I was surprised that Doug acted so quickly to submit the pull request. For now, things like that would take me so much longer, so it’s probably something I might have put on my “to do” list and made note of. But, it really ended up being an additional 5 minutes of work for Doug. It definitely inspired me to make more pull requests.

After all was said and done, I asked Doug about tests for the site. Did he have tests that ensured that a customer could create an account? His answer was “No”. There were Spree’s own tests, and there were his own unit tests that he wrote when adding new features and functionality, but there were no integration tests that covered the existing functionality. He did admit that it would be nice to have some Cucumber integration tests that made sure the pages were rendering with the correct data, but he hadn’t had time to implement those yet.

Later, I asked my husband this same question…whether his team had integration tests surrounding their Spree platform. He said he understood Doug’s predicament. It’s not an easy task to go and add integration tests for Spree, but until you do, you have no confidence that everything is going right. He said that at one point in their software’s life cycle, the codebase around Spree had grown so large that they had to stop what they were doing and add Cucumber integration tests around the core site to gain more confidence in the system.

In the end, I learned so much. Not bad, for a Waza afternoon!

24 11 / 2013

Today I took some time to play around with Bash, trying to get more comfortable with it. As part of this, I wanted to install a tool I had previously found called BashFinder, which I had attempted to install before but could never get to work.

Part of the problem with installing BashFinder is that it requires two other installations, which led me down other long and winding roads. I ended up with too many components I was unsure about when things wouldn’t work.  Which part isn’t working?  What are these other components even for?!

Today, I patiently read through everything, taking my time, and, finally, victory was mine.

Starting off, BashFinder is a pretty cool tool for people like me that still can’t break themselves of using the Mac Finder. Sometimes it’s just easier. I’m sure a year from now, I probably won’t use it at all, however, for the time being, I do, so BashFinder it is.

What BashFinder does is allow you to set the “pwd” (otherwise known as the working directory) in Terminal to the directory you have open in Finder.  This happens automatically.  As I’m “cd”-ing into various folders in Terminal, my Finder window follows along as well.

This also works vice versa…your active directory in Terminal can be opened in Finder simply by typing the command “cdff”. Pretty neat!  And highly useful (for me at least!).

Unfortunately, there are two other tools needed for getting BashFinder to work:

  • Git Completion
  • Bash Completion

Both of these tools, once I got them figured out, ended up being huge bonuses in themselves. Git Completion (finally finally finally) allows me to use the tab key to auto-complete my branch names (woooohoooo!!).  Bash Completion seems to do a ton of stuff (or so I’ve read after lots of research). For now, the most helpful thing it does is remind me of the names of commands (like ‘grep’ and ‘ack’). I’m pretty new to using these commands and often forget how to go about using them.  So this is where Bash Completion enters the picture.

Now, let’s talk about Bash…

When I first started at 8th Light as an apprentice, Josh took the time to sit down with me and set up my Terminal to be more user-friendly. He installed his “dot files” on my computer and I was completely lost as to all that was going on. Fast forward to today, though, and thanks to a few other craftsmen (like Eric Meyer), I have slowly absorbed more and more and understand so much more now.  I think simply installing tools like BashFinder teaches you soooo much.  It can be a frustrating experience, but having to fix things on your own and troubleshoot your way through a problem is a great teacher.

What I understand now is that there are a host of configuration files on your computer under the home directory (“~/”) for Bash. These might not exist when you first get started (they are not all necessary), but they are easy to set up and there can be A LOT of them (which I found a bit  confusing in the beginning).

Back to BashFinder…

When installing the tool, I had to go through the various files quite a few times, making sure all of my settings were configured how I wanted them to be. (On a sad note, the BashFinder code includes quite a few configurations that aren’t necessary for the tool to work. This ended up screwing up a lot of my other settings in the beginning, so I had quite a bit of deleting to do.)

A few configuration files I learned about today…

  • .inputrc - deals with the mapping of the keyboard for certain situations.  This is the book definition anyways.  An example of an .inputrc entry is in the sketch after this list.
  • .bash_macosx - where all of the BashFinder code lives; a Google search of this only turns up the Github page for BashFinder, so I’m guessing this isn’t a standard file. This file’s code is called by another file (see below), so it was probably just a simple extraction by the author.
  • .bash_aliases - a file typically used for specifying bash shortcuts (for example, “alias cls=clear”).
  • .bash_profile - where the meat is…this file has “if” statements similar to the ones sketched below.
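A rough sketch of what I mean (the exact lines in my files are a bit different, and the .inputrc entry is just a common example rather than the one BashFinder ships with):

# ~/.inputrc - a readline setting, e.g. case-insensitive tab completion:
#   set completion-ignore-case on

# ~/.bash_profile - pull in the other dot files if they exist
if [ -f ~/.bash_aliases ]; then
  source ~/.bash_aliases
fi

if [ -f ~/.bash_macosx ]; then
  source ~/.bash_macosx
fi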

So .bash_macosx and .bash_aliases both have their own “if” statements included in .bash_profile. That’s how they get loaded each time Bash is loaded. The only file that doesn’t get loaded this way is .inputrc.

The reason for this is that .inputrc is a start-up file used by Readline (the input library used by Bash).  Ah, another book definition.  The important point to note here is that since it’s a start-up file, it gets loaded automatically when Bash is loaded.

Up next, .bash_profile in depth…

This brings me to my next point…when making changes to .bash_profile, in order to get the changes to show up right away, use this command:

source ~/.bash_profile

This reloads the .bash_profile file so you can see your changes instantly.  (I only used this about a million times while trying to switch things around today.)

Now, circling back to my original point, I fully understand “dot files” now.  They are simply a bunch of files that start with a “.” (dot) and configure your environment.  And, since everything is so configurable in Mac OS X, it makes sense that so many developers would have their own.  Sidenote:  Quite a few of the craftsmen at 8th Light keep their “dot files” in Git, which makes them easier to access and keep backed up.

I have also been impressed that, in reading them, 8th Lighters’ dot files are well commented and broken up into well-named functions (what else would you expect, I guess?).

In reading back through Josh Cheek’s .bash_profile, I had an urge to change the color of my terminal prompt as I’ve been using the same one for almost a year. Scrolling through the file, I found this code from Josh where the color magic happens:

I could see what he was trying to do here. The “PS1=” line is what sets the prompt.  But the line itself is pretty messy. I played around with it for a good 10 minutes, getting nowhere, not understanding which part was the color setting. So, backing up a bit, I started researching how to set colors. From there, I created a list of variables for each color I would ever possibly want to use. I also learned that there is one set of numbers for foreground colors and another set for background colors. In the end, this is what I came up with:
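Something along these lines (a sketch of the idea rather than my exact file - the escape codes are standard ANSI colors, 30-37 for foreground and 40-47 for background):

RED="\[\033[0;31m\]"
GREEN="\[\033[0;32m\]"
YELLOW="\[\033[0;33m\]"
BLUE="\[\033[0;34m\]"
CYAN="\[\033[0;36m\]"
RESET="\[\033[0m\]"

# user@host in green, working directory in cyan, then back to normal
PS1="${GREEN}\u@\h ${CYAN}\w${RESET} \$ "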

So much easier to read if I do say so myself!

30 9 / 2013

Paul gave me the task of building a web application that integrates with my Java Server. Main requirement: it must make fun of a craftsman.  So my app picks on Brian Pratt. It’s titled “Brian the Cranky Tweeter”.

As for functionality, the app contains a repository of Brian Pratt tweets (as well as a few other craftsmen tweets) and the user has to decide if a specific tweet is Brian’s or not.

Now in order to get my app to work with my Java Server, Robert Looby, another apprentice here, gave me a couple of hints about making a configurable router that could “plug in” to my Java Server (more on that in a later post). To do this, I would be importing one .jar file into another .jar file.

But, first things first, how do I get two .jar files to talk to each other?  Michael Baker once suggested that when you’re doing something new for the first time, start with a blank project.  So that’s what I did.  I created two projects - MainProject and SecondaryProject.  And here’s how I got them talking to each other…

Below is what my SecondaryProject looks like.  I created two class methods…a main method and a method that returns the day of the week.  Note, I used static methods, but you could use instance methods just as easily.
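It looked roughly like this (re-created from memory, so the details are approximate):

import java.text.SimpleDateFormat;
import java.util.Date;

public class Response {
    // "EEE" formats the day as a three-letter abbreviation, e.g. "Mon"
    public static String displayDayOfTheWeek() {
        return new SimpleDateFormat("EEE").format(new Date());
    }

    public static void main(String[] args) {
        System.out.println("today is " + displayDayOfTheWeek());
    }
}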

When I run the main method, I get a printed message “today is Mon”.

Next, I create a .jar file from the SecondaryProject.  One trick here is not to specify a Main class.  Just leave that part blank.  And, technically, you don’t need the main method in this project.  I just created it to test that my displayDayOfTheWeek() method worked above.

Once the .jar file is created, I moved on to MainProject.  First step is to create a main method.  Pretty simple.

Next, I imported the SecondaryProject .jar by going to “View” > “Open Module Settings” and then clicking on “Libraries”.  (If you can’t see the “Open Module Settings” option on the “View” menu, it’s probably because you have to first select the top level of your project in the left pane.)

Once you add the library, it will show up in the left pane under “External Libraries”.

Once the .jar is linked as a library, you can simply call the “Response” class (like below).
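Again, roughly (a sketch):

public class Main {
    public static void main(String[] args) {
        // Response comes from the SecondaryProject .jar we just linked
        System.out.println(Response.displayDayOfTheWeek());
    }
}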

Once I write the code, executing the main method prints “Mon” to the screen.

And that’s it!  Pretty simple stuff.

29 9 / 2013

So back in July, I learned about a neat method in Java called getResourceAsStream. It allowed me to configure a few templates (.html files) that my Java server needed (a file directory template, a 404 template, a form template), include them with my .jar, and then copy them to a subfolder on the user’s system and use them once the server was launched.

Problem was, this code was a bit of a headache. In my original code I had to hardcode the file names into my server in order to grab them. My original idea was to include a directory called “templates” in the .jar file and then write code that would iterate over each file in the directory, copying each one to a temporary directory.  So I started searching for a sample solution.

And, as luck would have it, there are probably a good 20-ish posts on Stack Overflow where other developers have wanted to do the same thing.  The answer though?

Can’t be done.

Why??  From what I’ve gathered, it has to do with the fact that the templates folder is not truly a “file” when the .jar is loaded, so grabbing the directory as if it were a file and iterating over its contents won’t work.

I didn’t buy it. Seriously? It can’t be done? I found that hard to believe. So I kept searching and searching and searching. At one point, after too many hours of searching, I gave up and that’s when I decided to just hardcode in the filenames. Voila. Not pretty, but problem solved.

Fast forward to this week when I was building my “Brian the Cranky Tweeter” application (a story for another day) that sits on top of my server. In brainstorming how the application was going to work, one of my earliest thoughts was “Well, I’ll definitely need some views. I’ll need a question page and an answer page and I’ll put these in a”…{insert sound of brakes screeching}

Oh no. This was the point where I realized I was going to have to hard code MORE file names for my getResourceAsStream functionality. Really??

My opinion on the matter was this:  it’s one thing to hack some code to get it to work, but now I was going to have to hack more code on top of the existing hacked code and that just didn’t seem right.  Enough was enough.

In all of my prior research into using getResourceAsStream on a directory, there was one “solution” I had stumbled on a couple of different times…use a zip file. But, at the time, this seemed hard and like overkill.

So what would any good desperate apprentice developer do?

Start googling “zip file getResourceAsStream”.

And in the end?

It wasn’t even hard! It probably took an additional 5 lines of code and an hour or two of concentration.

So, if you ever need to use getResourceAsStream on a directory, save your breath and your fingers and just use a zip file!
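For the curious, the approach boils down to something like this (a sketch, not my exact server code - the zip file name and destination are hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class TemplateExtractor {
    // Copies every entry of templates.zip (bundled inside the .jar)
    // into the given destination directory.
    public static void extractTemplates(Path destination) throws IOException {
        InputStream resource =
            TemplateExtractor.class.getResourceAsStream("/templates.zip");
        try (ZipInputStream zip = new ZipInputStream(resource)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                Path target = destination.resolve(entry.getName());
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(zip, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        extractTemplates(Paths.get("templates"));
    }
}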

29 9 / 2013

A few weeks ago, Mike Jansen, one of the craftsmen at 8th Light, coordinated an 8LU talk on the Gilded Rose kata. It was a simple exercise. We worked in pairs trying to decipher a huge nested “if” statement of legacy code that had no tests.

Before we began, Mike talked about why he loved the kata. I think his explanation went something like this: often times we find ourselves in a situation where we need to add functionality to an already-existing code base that has no tests. Stakeholders *know* the code works and rely on it (sometimes heavily), but it’s old, people are afraid to mess with it, and no one truly understands what is going on beneath the hood.

I’ve found myself in this situation more than a few times in my career, but always on the business side of things. It’s a frustrating experience. It leaves you feeling helpless that you can’t get more from the functionality, anxious because, if it did stop working, things could become impossible, and, in my case, bitter because developers never seemed to really grasp how important the functionality was. Often times I would think, “If they could see things from my point of view, they would be working on that code tomorrow.” Instead, often times, I got a lot of shrugging shoulders, “dunnos”, and “We should just throw it away and re-write it from scratch.” ARGH!

So, that said, I love the Gilded Rose kata. I’ve done it four times, start to finish, and each time I learn better ways to refactor “if” statements. And it feels EMPOWERING to get better at the kata. Each time I do it, I reflect on all of the “mystery box” applications I’ve had to rely on in the past - code that was old, only did half of the job, and was abhorred by everyone.

The kata was the first kata I ever attempted when I came to 8th Light. I was new to testing, new to attr_accessors/attr_readers, and new to reading code that was split between multiple files/classes. So, for my first attempt, I tried what little I knew and then ended up downloading three solutions from Github and re-engineering others’ code.

Needless to say, the Gilded Rose kata is a bit nostalgic for me being the first thing I ever did. When I worked on it during the recent 8LU, I paired with a friend of mine, Dave Kaplan, who just finished Starter League and is probably about where I was when I completed it my first time.

What an eye-opening experience! I knew exactly what I was doing this time around, which really spoke to me….”You’ve come a looooooooooooooooong way.”

Dave and I didn’t get very far during 8LU. In the 45 minutes we worked, we were able to surround the code with lots of tests that supported the README requirements, but didn’t have much time to do anything else. So, this past week, I circled back to the kata in an effort to finish it on my own.

One thing I found surprising: after writing all of my tests, I used the SimpleCov gem to make sure my tests were testing every line of code. If you’ve seen the Gilded Rose kata before, you’ll probably remember that the code is quite a bit of a mess. So even after writing all of my tests, I wasn’t 100% confident I was testing every line. Sure enough, SimpleCov showed me I was only testing 99% of the code and had one “else” statement at the very bottom of the codebase that wasn’t being covered. After going over everything for another 20 minutes, though, I couldn’t figure out why the “else” wasn’t being executed by any of my tests, and the complexities of the nested “if” statements were wearing on me, so I gave up, chalking it up to “I bet it’s an ‘else’ statement that never gets executed.”

So wrong.

From there, I started refactoring the code, going through every “if/else” statement carefully, dissecting various pieces as I went and, sure enough, I found why the code wasn’t being executed. I had, in fact, missed a test because the README had failed to mention one bit of functionality that the code was doing. Huh.

Next I went back through all of my previous attempts at the kata over the past year, as well as the 3 solutions I downloaded from Github, and guess what? In all but one of them, the solutions were missing the hidden requirement. The thought occurred to me then…Mike Jansen had mentioned this during the exercise. Often times, we *think* we know what this legacy code is really doing, but do we really?? In this case, the README detailed all of the supposed requirements of the kata, but, in reality, there was more functionality going on underneath the hood.

I think that’s one reason it’s so important we dissect legacy code ever so carefully, otherwise, we’re bound to miss something. And before we know it, we will have a nice piece of beautiful, stinking garbage that doesn’t operate in the same way the legacy code did. And no one realizes it until the code ends up in production and everyone is shrugging their shoulders and pointing fingers.

The first time I did this kata, I wrote tests relying on the README and attempted to decipher the “if” statement as I went. But somewhere along the way, my patience with the “if” statement’s complexity started to degrade (or maybe there were just parts of it that were simply too complex?). So as I felt more confident with how the whole thing was working, I would skip steps, deleting the rest of the legacy code and finishing based on my own assumptions. I can only imagine that’s what the other two Github coders did as well.

So if this was indeed an intentionally missing README requirement, I consider it pretty brilliant. It definitely separates solution-writers into two camps - those who relied on the README only and those who relied on both the README and the code.

So want to know what the missing README requirement was? Spoiler ahead….






























The missing requirement that I found was that Aged Brie increases in quality twice as fast once the sell by date has passed. Per the README:
- Once the sell by date has passed, Quality degrades twice as fast.

The word “degrades” here caused me to assume that this requirement only related to items with depreciating quality, so I excluded Aged Brie, which actually increases in quality.
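For reference, the branch that tripped me up lives near the bottom of the update method. In the common Ruby version of the kata it looks roughly like this:

if item.sell_in < 0
  if item.name != "Aged Brie"
    # ...normal items (and backstage passes) lose quality a second time here...
  else
    # the hidden behavior: expired Aged Brie *gains* quality a second time
    if item.quality < 50
      item.quality = item.quality + 1
    end
  end
end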

Of course, I could be wrong about the whole thing. If that’s the case, I’d be interested to hear it.

If you’re up for the challenge, try it for yourself and let me know what you find.

17 9 / 2013

Does setting your clock 10 minutes fast really work?

I had a conversation with my husband on the way to work the other morning regarding the time. We were driving my car. It was a simple question he asked, “Are we going to be late?” (We take the train.)

My answer went like this:
"Well, my car clock is 7 minutes fast and the train leaves at 7:42. So since the clock says 7:30, it means we have 12 minutes plus an additional 7 minute buffer, so we have 19 minutes. Now if we get to the gas station up ahead with 10 minutes to spare, then typically I know we’ll be on time. So currently, we are 7-9 minutes ahead of schedule."

At this point, my husband did a doubletake and we both started laughing. “Wow! That’s a lot of math!” was his response.

Ha! It is a lot of math, and ever since he said that, I have been thinking more and more about setting my clock “fast”. We work so hard to make things simple and understandable in code, so why do I want to mess with something as basic as time, making it so much harder than it needs to be?

I’ve set my clock fast for almost my entire adult life. Why? Does it really make me more on time? Do I feel more refreshed/relieved to have that “extra” 7 minutes of time? Is it really even extra?

I did some research tonight. Those in favor of setting clocks fast argue that you see the 7:42 time earlier and think, “Ohmygosh! My train is about to be here! I’m going to miss it!” but then you’re quickly reminded that the clock is 7 minutes ahead so you think, “Phew! Silly me! I’ve still got 7 minutes.”

Problem is that I never (ever ever) see 7:42 on the clock and think “It’s 7:42 and I’ve missed my train.” Instead, I’m always immediately subtracting 7 minutes. It’s like a lookup chart I’ve got going in my head at all times…

7:30 => 7:23
7:35 => 7:28
7:40 => 7:32

About the only way a clock set fast would work, in my opinion, is if you didn’t know how fast it was…if it was randomly changed each day. I’ve actually thought through this idea several times. Usually, in the morning I’ll get up, shower and get ready and then eat breakfast with my husband in the kitchen. While eating breakfast, my eye is on the kitchen oven clock (set to the correct time). Now, wouldn’t it be nice if, before coming into the kitchen, my husband set the clock 10 minutes fast but didn’t tell me? That way, when I got in my car and saw the “real” time I would think, “Oh wow, I’m 10 minutes early!” From that point on, though, I would know the kitchen clock was 10 minutes fast and would spend the extra 10 minutes sipping coffee in the kitchen. So, instead, we would have to agree that he change the clock every morning before I came in so I had no idea how fast the clock was and would just need to rely on the time that was displayed. Crazy idea, eh?

In summary I think, “Why make it so hard?” I’m convinced it doesn’t help much.

Tonight, I’m putting a moratorium on setting clocks fast.  Will let you know how it goes.