
A couple of years after moving to the US I began work for a healthcare startup that was dealing with some serious reliability and deliverability issues. For any service provider, inconsistency can destroy your chances of success, but in the literal life-or-death scenarios presented to us in patient care, the dedication required and expected of your supporting staff is even more meaningful. You ask your team to make sacrifices that interfere with their families, friends, vacations and holidays. It amounts to real losses in quality of life that may not be easy to recover.

As a lead developer in that kind of environment my job was two-fold: ensure our operations continued to run as smoothly as possible by organizing, training and leading tier 3 development support, and, in parallel, deliver improvements to the system that would in turn reduce our tier 3 burdens.

What we all began to notice very early on is that the disrepair of our system encouraged really bad habits, including a very high tolerance for risk taking in production. Now, the problems we were solving were serious; the ramifications of not resolving them would be disastrous. The methods for solving production problems bifurcated into two strategies: the Lone Hero developer and the coordinated development team (hereafter referred to as the Avengers!).

Lone Heroes (Developer equivalent of Tony Stark)

Our team contained a lot of really smart people who had already cut their teeth in a variety of different jobs. Coming to this organization was, for many, the opportunity to put years of disciplined training to the test. What I noticed at first is that this took the form of high-risk updates that mostly worked but would occasionally cause more problems. Even when it did not cause more problems, it led to an entrenched tribal knowledge that team members either would not or could not formally pass on to others.

Let me be clear: no one ever thought this was purposeful, but the work atmosphere rewarded the solo effort in informal yet powerful ways, and so others would replicate the same behavior. Now, most developers who tackle problems over the course of hours or even days will tell you there is a particular kind of rush and exhilaration you feel when you find a solution to a complex problem. Once you get that rush of success it is addictive, but it can easily become toxic, especially if you are unable to unwind or release that energy. The level of concentration required reminds me of those cats chasing a laser pointer: if they see the moving red dot, nothing will distract them from a fruitless hunt. It becomes a self-sustaining cycle.

Avengers (Coordinated & Talented Dev Teams)

Well-functioning and mature developer teams have one thing in common. Process.

It can be informal or formal, but it is a contract that all members must willingly adhere to, even if it means working at an abbreviated pace (and in the beginning it always does). The whole point is accountability and repeatability; these two pillars provide the atmosphere for the most junior of developers to respectfully challenge the work of the most senior developers and then expect fully reasoned answers.

Your particular process may take the form of code reviews, peer programming, stand-up meetings, war rooms, or business/functional documents. The lone heroes tend to buck against the process; this is normal, as they have gained significant success by working alone. While this attitude remains a burden to be assiduously avoided, it is not unexpected, and thus not beyond a measure of control.

There is a real trick to convincing talented individual performers to work together in an unselfish way, but it can be accomplished with a strong and compelling vision of a future that overrides our need to be at the forefront of individual success. Thankfully, the managers and developers I have worked with have always valued balanced and productive teams over superheroes.

September 18, 2014 3:43    Comments [0]
Tagged in Musings | Software



Giving away free things is always good; in fact, I have witnessed people jump through some pretty bizarre hoops just for the slightest possibility of securing “free” swag. So you can imagine how mystified I was by the backlash over Apple’s release of U2’s new album, "Songs of Innocence", to all iTunes users.

This spawned a series of responses from the Apple faithful; here are a couple of examples:

Apple lets users who hate free stuff ditch U2 album

As a part of its big press announcement last week, Apple announced that every iTunes user would get U2’s new album for free. While some people were thrilled to have some new music in their account for free, others…weren’t so happy. People who don’t care for the Irish rock group took to Twitter to complain about the appearance of “Songs of Innocence” in their music library. Now, Apple has offered those people who don’t care for the album a way out.

Apple, U2 and looking a gift horse in the mouth

But the inordinate amount of actual anger directed at Apple and U2 over this is so disproportional to the actual event, I’ve started to wonder about the mental state of some of those complaining. It’s really been off the charts.

If you fall into that camp, let me speak very plainly: I have no sympathy for you. I have trouble thinking of a more self-indulgent, “first world problem” than saying “I hate this free new album I’ve been given.”

So we went from being kind of weird for not wanting the album to being mentally unstable. Ok …ok. In reality I was ambivalent about the album, the gesture and the general response, but then I happened upon this response from Mr. Ed Bott:

Exactly! Apple has now provided a link for iTunes users to remove the advert album; it does remain free until October 13th if you are actually interested.

September 16, 2014 3:32    Comments [0]
Tagged in Music | Musings



I came across an interesting problem recently that reminded me why I both love and remain deeply suspicious of the .NET framework. A web server was retrieving images from some service and, almost overnight, those images refused to render. For the sake of brevity (and security) I am leaving out much of the architecture, but essentially we were taking a valid Base64 encoded string, converting it into a byte array, then creating an image from that byte array (pretty standard stuff). I was able to replicate the portion of the code causing the issue, and it seemed to stem from an innocent-looking static method on the Image object called FromStream. Here is the code:

using System;
using System.Drawing;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        byte[] backImageBytes = null;
        try
        {
            // GetBase64String() stands in for the service call that
            // returns the Base64 encoded image payload.
            string val = GetBase64String();
            backImageBytes = Convert.FromBase64String(val);

            // Decode the byte array into an Image; this is the call
            // that failed on the web servers.
            using (MemoryStream m = new MemoryStream(backImageBytes))
            using (Image backImage = Image.FromStream(m, false, false))
            { }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        Console.ReadLine();
    }
}

Here was the exception I was getting on the web servers, and it was only occurring there; I simply could not replicate it on my developer laptop or any other machine for that matter.

Exception: System.ArgumentException
Message: Invalid parameter used.
Source: System.Drawing
    at System.Drawing.Image.FromStream(Stream stream, Boolean useEmbeddedColorManagement, Boolean validateImageData)
    at System.Drawing.Image.FromStream(Stream stream, Boolean useEmbeddedColorManagement)
    at System.Drawing.Image.FromStream(Stream stream)
    at ConsoleApp1.Main(string [] args) in C:\dev\ConsoleApp1\Main.cs:line 132

My immediate assumption was that this was some problem with an older version of the framework on the web server, but I quickly realized that appropriate versions existed on all the servers and desktop machines I was testing. So at this point I wanted to see what the Image.FromStream() method was actually doing (image from Reflector).

Image.FromStream()

So I immediately see SafeNativeMethods.Gdip.GdipLoadImageFromStream() and roll my eyes; this is not going to end well (or here for that matter). So what is it? *It* is a DllImport, a platform invoke that allows managed code to call unmanaged functions exported by DLLs that *should* exist on your server.

[DllImport("gdiplus.dll", CharSet=CharSet.Unicode, SetLastError=true, ExactSpelling=true)]
internal static extern int GdipLoadImageFromStream(UnsafeNativeMethods.IStream stream, out IntPtr image);
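
To make that a little more concrete, here is a minimal sketch of my own (not from the original code) showing how such a declaration is consumed from managed code; the class name PInvokeSketch is made up, and GetTickCount64 from kernel32.dll is used purely because it is a simple, well-known export. The point is that the binding to the native function happens at run time, which is exactly why a perfectly healthy managed build can still blow up on one particular server.

using System;
using System.Runtime.InteropServices;

class PInvokeSketch
{
    // Managed declaration for an unmanaged function exported by
    // kernel32.dll; the runtime loads the DLL and binds the export
    // the first time the method is called.
    [DllImport("kernel32.dll", ExactSpelling = true)]
    private static extern ulong GetTickCount64();

    static void Main()
    {
        // If the DLL or export were missing or behaved differently on a
        // given machine, this would fail at run time rather than compile
        // time, the same class of failure the GDI+ call above is exposed to.
        Console.WriteLine("Milliseconds since boot: " + GetTickCount64());
    }
}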

So what I now appreciate is that the framework only provides a relatively thin layer of abstraction over a native Win32 assembly by the name of gdiplus.dll (GDI+) for image-related manipulation. The above error is actually occurring at a level that is somewhat more fundamental to Windows. The truth is that the server was probably a little older than it should have been, and even a subtle change in the image type (say gif to tiff) would result in this kind of failure; without a GDI+ update this error would probably have continued to occur.
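
One thing that would have shortened the investigation is sniffing the magic bytes of the decoded payload before handing it to GDI+, so that a quiet change in image type shows up in a log entry rather than as an opaque ArgumentException. This is a diagnostic sketch of my own, not part of the original fix, and the GuessImageFormat helper name is hypothetical:

static class ImageDiagnostics
{
    // Rough image-format detection by magic bytes; purely diagnostic.
    public static string GuessImageFormat(byte[] bytes)
    {
        if (bytes == null || bytes.Length < 4)
            return "unknown";
        if (bytes[0] == 0xFF && bytes[1] == 0xD8 && bytes[2] == 0xFF)
            return "jpeg";
        if (bytes[0] == 0x89 && bytes[1] == 0x50 && bytes[2] == 0x4E && bytes[3] == 0x47)
            return "png";
        if (bytes[0] == 0x47 && bytes[1] == 0x49 && bytes[2] == 0x46)      // "GIF"
            return "gif";
        if ((bytes[0] == 0x49 && bytes[1] == 0x49) ||
            (bytes[0] == 0x4D && bytes[1] == 0x4D))                        // "II" or "MM"
            return "tiff";
        if (bytes[0] == 0x42 && bytes[1] == 0x4D)                          // "BM"
            return "bmp";
        return "unknown";
    }
}

Logging the result next to the exception message in the catch block above would have made the gif-to-tiff drift visible immediately.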

So …what do I love about the .NET framework? The consistency you are presented with no matter what server or machine you are running on. What am I suspicious about? The vast portions of “core” code that rely exclusively on unmanaged code and have nothing to do with the version of the .NET framework you have installed. You should take a look one layer down; it may actually surprise you how many P/Invoke calls you are currently making.
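
If you are curious how much of your own code path bottoms out in native calls, a quick and admittedly rough way to get a feel for it is to reflect over an assembly and count the methods carrying the PinvokeImpl flag. This is a sketch of the idea rather than a complete audit tool; the PInvokeAudit class name is made up, and the project needs a reference to System.Drawing:

using System;
using System.Linq;
using System.Reflection;

class PInvokeAudit
{
    static void Main()
    {
        // Count the extern (P/Invoke) methods declared in System.Drawing,
        // the assembly behind Image.FromStream(). PinvokeImpl is set on any
        // method whose implementation is an unmanaged DLL export.
        Assembly drawing = typeof(System.Drawing.Image).Assembly;

        var pinvokes = drawing.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.DeclaredOnly |
                                          BindingFlags.Static | BindingFlags.Instance |
                                          BindingFlags.Public | BindingFlags.NonPublic))
            .Where(m => (m.Attributes & MethodAttributes.PinvokeImpl) != 0)
            .ToList();

        Console.WriteLine("P/Invoke methods in System.Drawing: " + pinvokes.Count);
        foreach (var method in pinvokes.Take(10))
        {
            Console.WriteLine("  " + method.DeclaringType.Name + "." + method.Name);
        }
    }
}

Point it at your own assemblies and the count is usually higher than you would guess.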



September 15, 2014 12:53    Comments [0]
Tagged in .NET | Windows



For those of you living, shall we say, a more sheltered existence, the debate around network neutrality centers on a new ruling that would, for the first time, allow Internet Service Providers (ISPs) to charge tech companies (or anybody) to send data at premium speeds, creating a de jure two-tiered system for all internet users (slow and fast lanes).

Unsurprisingly, the current crop of tech giants (Amazon, Google, Facebook, etc.) are quite happy with the current neutral offering, which makes sense: they have reaped untold profits from a system that treats everyone the same. Every budding startup can also compete on the exact same footing as an entrenched multi-billion-dollar corporation (theoretically). This is the pure, egalitarian brilliance of the internet: zeros and ones are free to fly around the globe unencumbered by their source or destination.

With a system this successful, who would want to change it? And why? Well, companies like Comcast, Verizon and AT&T for a start. They are the gatekeepers, and that position would provide far more leverage if they could charge differing, arbitrary premiums for faster service. They could then, in theory, favor one company, who pays more, over another, and rather than a market-driven capitalist system we would trend toward an oligopoly.

New FCC CTO

Who the Federal Communications Commission (FCC) puts in charge of anything is generally not newsworthy for me, but the appointment of Scott Jordan to the position of CTO has some particular relevance, especially in light of the recent heated net neutrality debates. His views on net neutrality have been published for some months now through the University of California and can be generously described as nuanced (I prefer “on the fence”). Here is an excerpt (emphasis mine):

We argue that neither the extreme pro nor con net neutrality positions are consistent with the philosophy of Internet architecture. Our view is that the net neutrality issue is the result of a fragmented communications policy unable to deal with technology convergence. We develop a net neutrality policy based on the layered structure of the Internet that gracefully accommodates convergence. Our framework distinguishes between discrimination in high barrier-to-entry network infrastructure and in low barrier-to-entry applications. The policy prohibits use of Internet infrastructure to produce an uneven playing field in Internet applications. In this manner, the policy restricts an Internet service provider's ability to discriminate in a manner that extracts oligopoly rents, while simultaneously ensuring that ISPs can use desirable forms of network management. We illustrate how this net neutrality policy can draw upon current communications law through draft statute language. We believe this approach is well grounded in both technology and policy, and that it illustrates a middle ground that may even be somewhat agreeable to the opposing forces on this issue.

Our proposed layered approach to defining nondiscrimination rules that removes the need to define either “managed services” or what constitutes the “Internet portion” of a provider's offerings. We propose that any QoS mechanisms that an ISP implements in network infrastructure layers should be available to application providers without unreasonable discrimination. Requiring such an open interface can ensure that ISPs are prohibited from refusing to provide enabling Internet infrastructure services to competing application providers in order to differentiate the ISP's own application offerings, prohibited from providing Internet infrastructure services to competing application providers at inflated prices in order to favor the ISP's own application offerings, and prohibited from making exclusive deals to provide enabling Internet infrastructure services to certain application providers. It can also ensure that ISPs have the right to apply network management mechanisms that do not threaten a level playing field, and to make arrangements with consumers, application providers, and peering ISPs for Internet infrastructure services in a manner that does not conflict with the above goals.

Let me summarize:

  • Allow ISPs to define Quality of Service (QoS) tiers through “management mechanisms”.
  • ISPs should make QoS tiers quantifiable and sellable to all companies.
  • ISPs cannot provide preferential treatment to any application providers.

Think About The Future

Please do not be fooled: this is not net neutrality, nor is it a middle-ground compromise. At a minimum this skews the net in favor of the internet winners of today in a wildly disproportionate way, and it will disrupt the grassroots innovation that occurs every day. The FCC actively encouraged us to make our voices heard, and I sincerely hope the response that brought down the FCC comment system was enough of an indication of our collective intent to protect the net.

The truth is that the deck is stacked against us: the Chair of the FCC (appointed by President Obama) was a lobbyist for the same cable companies who are attempting to overthrow the neutral net, and together with this new CTO I am afraid the odds of a positive outcome are not great. As always, I remain hopeful.


September 2, 2014 15:48    Comments [0]
Tagged in Internet | Law | Network



Just last month I kicked the tires on the Bing Code Search tool and found it to be a solid way to access code samples from MSDN, StackOverflow, Dotnetperls and CSharp411. Now, here we are just a few short weeks later, and I am downloading a significant makeover of that tool, appropriately renamed the Bing Developer Assistant. With marked improvements in IntelliSense support and a new one-stop location for code snippets and samples, it definitely improves on the original.

For me the best new feature is the inclusion of Offline search, which provides the ability to define local directories that contain your own sample code, enabling disconnected search capabilities. Conceptually, I am hoping offline search will help the onboarding process for new developers on my team; I find they tend to reinvent the wheel (sometimes unnecessarily). These offline search capabilities could be further refined by allowing users to explicitly force disconnected searches or by giving priority to local code snippets in the search results.

CodeSnippetRepo (screenshot)

Download links are here, check it out!

August 26, 2014 23:02    Comments [0]
Tagged in Tools | Visual Studio
