UrlEncoding the hard way

I like poking fun at code only because I hope to highlight simple errors and make our programming world a better place ;) I am sure there is a trail of awful code littered behind my career; however, I am grateful that I do not have to look at it.

So the latest faux pas was seen in a piece of code that required a string to undergo URL encoding. This is a simple process that requires one to escape all of the illegal characters in a string so it can travel safely in a URL. The following are considered illegal (reserved) characters:

! * ' ( ) ; : @ & = + $ , / ? % # [ ]
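
As an aside, the percent-encoded equivalent of a character is simply a '%' followed by the character's value in hexadecimal. A quick sketch, assuming plain ASCII input:

// '/' is 0x2F in ASCII, so its percent-encoded form is "%2F"
Console.WriteLine("%" + ((int)'/').ToString("X2")); // prints %2F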

That requirement led to the following code snippet...

String HttpPost = GetRawHttpString();
// '%' must be replaced first, otherwise the '%' introduced by the
// replacements below would itself get encoded again
HttpPost = HttpPost.Replace("%", "%25");
HttpPost = HttpPost.Replace("|", "%7C");
HttpPost = HttpPost.Replace("/", "%2F");

Now the basic concept is to identify the reserved characters and replace them with their percent-encoded equivalents. This is not wrong, just a little off... It is the kind of thing you do when you are, for example, an expert C programmer converting to the .NET Framework. It is really easy to fall into the trap of solving the problem yourself versus learning more about what the .NET Framework already provides. This is probably what I would have done:

String HttpPost = GetRawHttpPostString();

HttpPost = HttpUtility.UrlEncode(HttpPost);

I think the real reason why HttpUtility was not used is that this was not a Web application, and in that scenario you are required to add a reference to System.Web.dll manually.
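
For completeness, here is a minimal console sketch of that approach; GetRawHttpPostString is just a hypothetical stand-in for wherever the raw string actually comes from, and the project needs the manual reference to System.Web.dll:

using System;
using System.Web; // requires the manual reference to System.Web.dll in a non-web project

class UrlEncodeExample
{
    // Hypothetical stand-in for however the raw POST data is obtained.
    static string GetRawHttpPostString()
    {
        return "a=1&b=two words?";
    }

    static void Main()
    {
        string httpPost = HttpUtility.UrlEncode(GetRawHttpPostString());
        // Spaces become '+', reserved characters become %xx,
        // e.g. a%3d1%26b%3dtwo+words%3f
        Console.WriteLine(httpPost);
    }
}

The nice part is that every reserved character gets handled for you, rather than only the ones that happened to come to mind.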
