After finally understanding what the .NET Standard is and why we need it, the next thing I wanted to do was investigate which of the many binaries, projects and packages I have used will continue to work with the .NET Standard. Thankfully there are a couple of tools that make this relatively straightforward.

Visual Studio Plugin

The obvious place for this analysis is Visual Studio, more specifically the .NET Portability Analyzer plugin. You can install it from the gallery like this:

  • Tools->Extensions and Updates...
  • Online >Visual Studio Gallery
  • Type in .NET Portability Analyzer and download/install it.
  • Visual Studio will probably require a restart

Once installed you have access to two new context menu commands which you activate by right clicking your project:

  • Analyze Project Portability
  • Portability Analyzer Settings

Executing Analyze Project Portability against your Visual Studio project will produce an Excel sheet that gives you a compatibility score ranging from 0 to 100%.

Analyze Project Portability Excel Results

In my example I am not 100% compatible, which means some portion of this project does not conform to the .NET Standard; clicking on the Details tab gives more specific reasons for what is failing.

Analyze Project Portability Details

Command Line Tool

If you are not inclined to use Visual Studio, or you are more interested in collecting these results during your continuous or official builds, you could also take advantage of the command line tool (it uses the same engine as the VS plugin).

Now you can execute the command line script as follows:

C:\debug\ApiPort.exe analyze -f C:\test\Test.dll

As with the Visual Studio plugin this produces an Excel sheet by default; however, this example is again only using the default targets for comparison. For a complete list of targets you can run the following command (versions marked with an asterisk are the defaults):

C:\debug\ApiPort.exe listtargets

So if I want to change the target to say, an older version of the standard like 1.3, then I can run a command like this:

C:\debug\ApiPort.exe analyze -f C:\test\Test.dll -t ".NET Standard, Version=1.3"

This makes integration with an existing build process much more straightforward; you could probably even wire this up to alarm if someone uses an API with limited support.
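To sketch that idea (nothing here is prescribed by ApiPort itself; the paths, target and exit-code rule are all assumptions), a build step could shell out to the tool and fail when the analysis cannot run:

using System;
using System.Diagnostics;

class PortabilityGate
{
    static int Main()
    {
        // Placeholder paths and target: adjust for wherever ApiPort.exe and your binaries live.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\debug\ApiPort.exe",
            Arguments = "analyze -f C:\\test\\Test.dll -t \".NET Standard, Version=1.3\"",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (Process apiPort = Process.Start(psi))
        {
            // Echo the tool output into the build log.
            Console.WriteLine(apiPort.StandardOutput.ReadToEnd());
            apiPort.WaitForExit();

            // A non-zero exit code fails this step; a real gate would parse the generated report instead.
            return apiPort.ExitCode;
        }
    }
}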

Checking NuGet Packages

There is a very useful community project called I Can Has .NET Core, which allows you to upload your project's packages.config, project.json and paket.dependencies for analysis. The site then builds a visualization of the packages and determines whether equivalent .NET Standard versions are available on nuget.org. As an added bonus you can point the site at your GitHub repo and it will automatically scour it for packages as well.

I Can Has dotnet core

Your dependencies are categorized as Supported, Known Replacement Available, Unsupported and Not Found.



July 6, 2017 3:54  Comments [0]
Tagged in .NET | Tools

Over the last few years .NET developers have been given an opportunity to develop software targeting a genuinely diverse set of devices, operating systems and platforms (.NET Framework, Xamarin, .NET Core). This has been mostly a blessing, but the subtle (and sometimes not so subtle) differences in each stack have started to create underlying issues around portability. So if you created a handy NuGet package, which .NET developers could take advantage of it has become a complex question, and this is mostly due to the various implementations of the Base Class Library (BCL).

As you know the BCL contains your primitive types, and frankly each version of .NET was created by a multitude of different teams of developers, so namespaces and classes got slightly different implementations, again negatively impacting your chances of true cross-platform development. So how do you unify the various BCLs?

.NET Standard

Why the .NET Standard?

The .NET Standard is a specification and it represents a set of APIs that all current and future .NET platforms have to implement.

  • The .NET Standard defines what is consistently available across each version of .NET
  • Cross platform developers can focus on mastering the standard rather than the platform

This may sound immediately familiar in that this was what Portable Class Libraries were supposed to accomplish, and to a certain extent they did. The problem was that the PCL was an afterthought rather than a strict standard, so each .NET platform team could decide whether or not they would implement an API; this inconsistent approach was immediately problematic for everyone outside of Microsoft.

Which .NET Standard to use?

Higher versions of the .NET Standard include the lower versions, so a platform that implements, for example, version 1.6 can also consume libraries built against 1.3 or 1.0. To see exactly which APIs and namespaces you get to target, check out the .NET API Browser. The APIs of .NET site also allows you to search for a particular class and see which standard versions and platforms are supported.

.NET Standard Compat table

Generally speaking you should select the lowest version of the .NET Standard that you can accept; this will provide the broadest base of compatible platforms and user experiences.

Over the next few weeks I am going to be checking out how this approach can assist me in bringing an application like DasBlog up to speed.




June 29, 2017 5:00  Comments [0]
Tagged in .NET

The theme for Visual Studio during this year's Build conference appears to quite explicitly suggest that Microsoft is happy for developers to develop and build code wherever they feel comfortable, even if that is not Windows 10. Considering the questions produced at Stack Overflow, we can generally infer that .NET Core (and to a lesser extent C#) is gaining in popularity.

Rather than rest on that success and the natural way the current toolset sits inside Windows 10, Visual Studio Code led the charge as a genuine cross-platform web tool. This idea of developing for any platform, on any platform, seems to be driving much of the current conversation.

.NET Standard 2.0

.NET Standard 2.0 & XAML Standard 1.0

What may not be obvious to developers is that there are actually different incompatible flavors of .NET, and the differences are sometimes subtle and only become obvious once you are in the weeds of cross platform development effort. This puts Microsoft into an interesting position of asking developers to leave useful and productive pieces of code behind in the name of forward progress.

The .NET Standard Library got a bump to 2.0, and this allows developers to actually learn one API that works across all .NET platforms. The original version of the .NET Standard was limited in that the shared API was fairly small and was implemented using Portable Class Libraries (PCLs); this in turn meant that there were still many libraries that simply would not fit smoothly into the standard.

The newest .NET Standard provides a specification for any platform to implement, so for example Tizen from Samsung could follow this standard and immediately be able to take advantage of a growing list of compatible libraries. Of course all .NET runtimes provided by Microsoft will implement the standard (.NET Framework, .NET Core, Xamarin, etc.).

For me personally I am looking at DasBlog and the way we can smoothly use years of reliable code buried in .NET 2.0.

.NET Core and Visual Studio for Mac (RTM)

When I heard back in November that the full Visual Studio experience would be available on the Mac, it really brought home that Microsoft's strategy for targeting developers is no longer exclusively focused on Windows. This RTM is not meant to be a limiting experience; they intend to provide parity across platforms, sharing large amounts of the original code base that provides project templating, backend debugging and the Azure publishing infrastructure. It is, in fact, using the same Roslyn compiler platform and MSBuild project system.

Amazing!

Xamarin Live Player (preview)

While Microsoft proudly proclaimed that Windows 10 is on 500 million devices, it would also be truthful to say that the aspirational target was missed by half. Many of us have long since understood that the only realistic targets for mobile development have always been iOS and Android; however, only Android could readily be built on a Windows desktop device.

Now with the introduction of Xamarin Live Player we are able to deploy directly to an iOS device, and your code can be debugged and tested on your Windows 10 machine. This is good but understandably not complete: final builds and app submission will still need to occur on a Mac, making this solution less than optimal for lone developers who hope to support the entire software cycle from Windows. Still, this is much closer than we have been.

Live Player will also support Android devices.

Additional News



May 13, 2017 3:48  Comments [1]
Tagged in .NET | Visual Studio

Over the course of the last year I have been tasked with analyzing our production environments, specifically looking at performance issues, hangs and crash analysis using the Debug Diagnostic Tool, Performance Monitor and Debugging Tools for Windows (WinDbg).

WinDbg is an ancient and primordial tool of the Windows ecosystem; it is one of the oldest native debuggers I am aware of. Its age means that it has no direct knowledge of the more modern .NET runtime. In order for WinDbg to give meaningful information about the .NET Framework and how objects are collected and released, we need to load a couple of extensions.

Preparing WinDbg

  1. Open WinDbg as an Administrator.
  2. Hit CTRL-D and navigate to your hang dump to load it into WinDbg.
  3. Load the .NET 4 managed (as appropriate) code extension and SOS extension with the following commands:
    • .load psscor4
    • .loadby sos clr

After loading these extensions you now have access to commands that will allow you to analyze the hang dump. Here are the basic commands I tend to use for high memory, high CPU/hangs, and app crashes.

WinDbg - High memory scenarios

!eeheap -gc

eeheap shows information on the memory heaps used by the GC. Under the server GC it displays heap info for each logical processor, so if you have hyper-threading on a dual core machine you would see four heaps.

!dumpheap -stat

Shows which objects that have not yet been collected are consuming the memory; the first column of the output is the method table, which identifies the type of object.

!dumpheap -mt methodtable

Dumps out a list of all objects of that type (based on the method table); the first column of the output is the object address.

!do address

A shortcut for !dumpobj; shows the properties of the specific object, including the object's value.

du value

Converts the value into a readable output.

!gcroot address

This command detects which objects reference this address. Useful for tracking down what might have a reference to stubborn objects.

WinDbg - Hangs and Performance Issues

!threadpool

This command shows CPU usage percentages. Be careful interpreting this on multi-use boxes: the percentage reflects the whole machine's CPU, not just your process, if this is not a dedicated box.

!runaway

This extension displays information about the time consumed by each thread. Very useful if you want to know whether a specific thread is consuming far more time than the others.

~* e !ClrStack

This command sequence shows the .NET call stack for all threads.

!syncblk

Tells us how many threads are waiting for a lock (the MonitorHeld column). This can be important for threads that are blocked; remember that it only covers .NET locks.

!dumpheap -thinlock

Shows all the locks that have no conflicts.

WinDbg – Crash scenarios

!analyze -v

Displays exception information; the verbose switch gives as much information as possible.

!dae

Dumps all the available exceptions.

A few other useful WinDbg commands

~ 13 s

Sets the current context to thread 13.

!ClrStack

Shows the .NET stack for the current thread context.

!aspxpages

Dumps the HttpContexts found on the threads and lists the URIs in various states of request and response.

!DumpASPNETCache -stat

Gives a list of objects stored in your web cache.

 

Having the tools and commands is one thing; understanding context is a whole other question. If you need help with that I would strongly recommend visiting the blog of Tess Ferrandez. A few years back she produced a legendary series of detailed hang analysis articles that remain wholly relevant today.



April 6, 2017 2:17  Comments [0]
Tagged in .NET | Debugging

When C# burst onto the scene at the turn of the century I was firmly embedded in the world of Visual Basic and C++; these two languages were fulfilling vastly different jobs for me. I was using C++ to create a high performance multi-threaded application, but with VB I was able to churn out high quality Windows Forms applications. The promise of C#, however, was a language that would allow you to accomplish both. I was in Detroit for that first C# presentation, and the way it was presented reminded me of a famous Bruce Lee quote:



February 15, 2017 6:55  Comments [3]
Tagged in .NET | C#

I have been wanting to revisit some design patterns for a while; this has become more urgent recently as my posts have taken on a more philosophical leaning with very little or no code.

I talked about the Singleton pattern many moons ago, but over the years I have changed my mind about the best approach multiple times; that, along with significant CLR improvements, means I have a few new ways to do this. For example, the first time I discussed this I never even considered static initializers! The way they work in .NET is that only one thread can be in a static initializer at a time. This means the framework treats your use of a static initializer as a cue to implement the locking for you, so there is no need for explicit locks, mutexes or volatile variable assignments. I would propose that in almost all use cases the CLR will do a better job than we will. So there is this:

public class SomeSingleton
{
    private static readonly SomeSingleton instance = new SomeSingleton();

    private SomeSingleton()
    {
    }

    public static SomeSingleton Instance { get { return instance; } }

    public void SomeThreadSafeWorkToDo()
    {
    }
}

I did reject a similar option earlier because I actually wanted to do Lazy initialization, but what I did not know was that the simple introduction of an empty static constructor alters when the framework is allowed to initialize static variables in the class. In the following example the private static instance variable will only be initialized when it is first used (via the public property) and not when the class is declared.

public class SomeSingleton
{
    private static readonly SomeSingleton instance = new SomeSingleton();

    /// Ensures lazy construction
    static SomeSingleton() { }

    private SomeSingleton() { }

    public static SomeSingleton Instance { get { return instance; } }

    public void SomeThreadSafeWorkToDo()
    {
    }
}

Let's ramp the laziness up to 11 just for a moment, because if you think about it, calling any other static method of SomeSingleton could inadvertently force SomeSingleton to be prematurely constructed. To get this lazy, the singleton should only be instantiated when the public Instance property is actively used. So check this out:

public class SomeSingleton
{
    /// Ensures really lazy construction
    private static class SingletonContainer
    {
        internal static readonly SomeSingleton instance = new SomeSingleton();
        static SingletonContainer() { }
    }

    private SomeSingleton() { }

    public static SomeSingleton Instance { get { return SingletonContainer.instance; } }

    public void SomeThreadSafeWorkToDo()
    {
    }
}

Admittedly I have not seen this used very often, but it can be helpful to take advantage of a private class nested within another class, because the accessibility rules allow SingletonContainer to call the private constructor of SomeSingleton. To be sure, this has ramped up the complexity; if you have no other static methods that could cause your singleton to be prematurely created, feel free to ignore the above approach.

Lazy in .NET 4

I seem to have spent much longer on lazy initialization than on the singleton pattern itself, but I thought it important to finally note that .NET now explicitly supports lazy loading via Lazy&lt;T&gt;, and so this becomes a more succinct rewrite. Unlike the previous nested class example it expresses much more directly what I am trying to accomplish.

public class SomeSingleton
{
    private static readonly Lazy<SomeSingleton> lazy = 
                new Lazy<SomeSingleton>(() => new SomeSingleton(), true);

    private SomeSingleton() { }

    public static SomeSingleton Instance { get { return lazy.Value; } }

    public void SomeThreadSafeWorkToDo()
    {
    }
}

I included the prior .NET options because I am constantly switching between .NET versions at work, and I assume others are too. To be clear though, this last option is the one I lean toward since the release of .NET 4.
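For completeness, here is a minimal usage sketch (it assumes the static Instance property shown above): many threads can hit the property concurrently and only one instance is ever constructed.

using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Lazy<T> guarantees the factory runs exactly once, even with concurrent callers.
        Parallel.For(0, 10, _ => SomeSingleton.Instance.SomeThreadSafeWorkToDo());
    }
}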



November 3, 2016 3:18  Comments [0]
Tagged in .NET | C# | Design Pattern

Supporting an application with installs across multiple versions of the .NET Framework becomes challenging especially when you have to provide fixes and upgrades across those versions. Occasionally you end up creating multiple answers to the same problems simply because features may not have been released with the version of the .NET Framework you are building against.

As this initial research is always one of my first steps, I thought I would just create a post that aligns the feature releases, Visual Studio and CLR with each of the .NET versions. A simple lookup table, so to speak.

So if you need to know which version of MVC was released with .NET 4, or you are just curious about when extension methods arrived, this table should help.

.NET Framework | CLR | Main features | Visual Studio | MVC | Entity Framework | Web API | SignalR
1.0 | 1.0 | - | Visual Studio .NET | - | - | - | -
1.1 | 1.1 | ASP.NET, ADO.NET, SQL Server data provider (SqlClient), .NET Remoting | Visual Studio .NET 2003 | - | - | - | -
2.0 | 2 | 64-bit platform support, Generics, SQL Server data provider, .NET Remoting | Visual Studio 2005 | - | - | - | -
3.0 | 2 | Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Anonymous Types, Object and Collection Initializers | Expression Blend | - | - | - | -
3.5 | 2 | AJAX, LINQ, Extension Methods | Visual Studio 2008 | 1, 2 | 3.5 | - | -
4.0 | 4 | Portable Class Library, In-Process Side-by-Side Execution, Dynamic Language Runtime, Tuples, Covariance and Contravariance | Visual Studio 2010 | 3, 4 | 4, 5 | 1 | 1
4.5 | 4 | Arrays larger than 2 GB on 64-bit platforms, Server Garbage Collection improvements, API for Windows Store apps | Visual Studio 2012 / Visual Studio 2013 | 5 | 6 | 2 | 2
4.6 | 4 | RyuJIT compiler, SIMD-enabled types, WCF support for TLS 1.2 | Visual Studio 2015 | 5 | 6 | 2 | 2



August 8, 2016 4:19  Comments [0]
Tagged in .NET

Most people who start coding (me included) have a natural affinity towards poor design, but we refine our approach slowly and methodically over months and years, probably as follows:

  • How do I get this app to work?
  • How do I get this app to look good?
  • How do I get this app to work faster?
  • How do I get this app to work consistently in any environment?

Recently, with so many consumers using mobile devices, I have begun adding one more serious consideration to that list:

  • How do I ensure that I consume as little power as possible?

I think we have just got to the point where we assume our mobile devices will give us a solid working day of use, but that assumption relies heavily on the idea of performing benign activities with a specific set of lower intensity apps. There was a really interesting post from the Windows blog recently that demonstrated how the Edge browser offers significant improvements in battery life over Chrome; they compared today’s leading browsers across three independent dimensions, and here are the recorded results:

Designing for mobile devices is not all about ensuring the UX/UI is touch friendly and accessible; it needs to include considerations and optimizations for battery consumption. I know I personally carry around power packs to ensure that I remain connected. So how exactly can we test our coding decisions to ensure low battery consumption?

Energy Consumption Profile

The Energy Consumption profiler in Visual Studio 2015 captures the activities of the display, CPU, and network connections of a device during a profiling session. It subsequently generates estimates of the power used and the total amount of energy for the profiling session. This is all model-based analysis and thus it provides only a rough estimate for a lower powered tablet or mobile device. It does not, unfortunately, include important components like the GPU, Bluetooth or GPS, but it is a reasonable baseline for most other normal activities.

Energy Profile Data (Visual Studio)

Along the profiling timeline you have the ability to set User Marks, which allow you to quickly tag an area of code that you suspect could be consuming too much power. The code is as simple as this:

if (performance && performance.mark) {
    performance.mark("Swipe on MainPage");
}

To set up Energy Consumption Profiling in Visual Studio, check out this site.



June 22, 2016 2:02  Comments [1]
Tagged in .NET | Hardware

A really smart colleague of mine made me aware of a plugin called Reflexil, associated with Reflector (and also ILSpy), that actually allows you to modify an assembly! At first this may not seem like the most useful feature; in truth you could open the solution in Visual Studio, make the change you need and hit build. However, the long tail of platform support can make this simple act a relatively arduous process, and in this case I am trying to make a change that is simple, temporary and for a developer environment.

JustDecompile

My preferred assembly inspection tool is Telerik's JustDecompile; it also has an assembly editing plugin, a repackaged version of Reflexil, developed by Sébastien Lebreton.

Reflexil

You could probably get all the information you need from a hang dump, but not everyone is comfortable with WinDbg; this tool gives you some fairly straightforward editing options and even includes a strong name remover.



June 8, 2016 3:10  Comments [0]
Tagged in .NET | Tools


Task-based Asynchronous Pattern (or TAP) was introduced into the .NET framework in version 4, and provided a consistent new pattern for arbitrary asynchronous operations. You have probably already used the Task class to perform an operation on a thread pool thread rather than the main application thread.

With this I can perform rudimentary recursive execution of code at regular intervals with the use of Delay followed by ContinueWith:

static void Main(string[] args)
{
    CancellationTokenSource cancellationToken = new CancellationTokenSource();

    Task.Delay(10000, cancellationToken.Token)
        .ContinueWith(x => DoStuff(10000, cancellationToken.Token),
            cancellationToken.Token);

    // Keep the console process alive so the scheduled continuations get a chance to run.
    Console.ReadLine();
}

private static void DoStuff(int timer, CancellationToken token)
{
    File.ReadAllText(@"C:\Users\somelogin\Desktop\test.txt");

    Task.Delay(timer, token)
        .ContinueWith(x => DoStuff(timer, token), token);
}

The DoStuff method calls itself via the ContinueWith call until the token is used to cancel the attempt. What is problematic with this code is that if the antecedent (“a thing or event that existed before or logically precedes another”) gets an exception, say the test.txt file does not exist, then you never get the opportunity to schedule a new asynchronous task. An exception shutting the whole loop down may not be what you want to happen. The simple way to overcome that is to place a try/catch around the ReadAllText method; however, the framework has a solution for that also:

private static void DoStuff(int timer, CancellationToken token)
{
    File.ReadAllText(@"C:\Users\somelogin\Desktop\test.txt");

    Task.Delay(timer, token)
        .ContinueWith(x => DoStuff(timer, token), token)
        .ContinueWith(x => x.Exception.Handle(ex =>
        {
            DoStuff(timer, token);
            return true;
        }), TaskContinuationOptions.OnlyOnFaulted);
}

In the above example I created an additional ContinueWith, but this one is designed to be called only when an exception occurs in the antecedent (using TaskContinuationOptions.OnlyOnFaulted); it then gives you the option to still recursively call the DoStuff method.

This whole TAP pattern is really well thought out in my humble opinion.



April 13, 2016 3:29  Comments [0]
Tagged in .NET | C#

I am doing a lot of work with clients and internal teams that are running software services for weeks and even months at a time and who end up needing assistance in understanding why a particular process or service is gradually consuming more and more memory. Anyone who has been active in supporting applications will immediately recognize that Performance Monitor (perfmon) is your best bet.

So to avoid repeating myself a dozen times I thought I could list the steps here:

  1. Hit Windows Key + R to bring up the Run dialog box.
  2. Type in perfmon and hit Enter.
  3. Open the Data Collector Set and right click on the User Defined folder.
  4. On the context menu select New->Data Collector Set.
  5. In the dialogue box define an appropriate Name, select the Create manually (Advanced) option, and click Next.
  6. On the next screen enable Performance counter option only and click Next.
  7. Add the following Performance counters for your process(s):
    • Process->YourProcess
      • Private bytes
    • .NET CLR memory->YourProcess
      • # Bytes in all Heaps
      • Gen 0 heap size
      • Gen 1 heap size
      • Gen 2 heap size
      • Large Object Heap size
  8. Modify both the Sample interval and the Units as appropriate and hit Next.
  9. Select the Root directory where you would like to save collector set and hit Next.
  10. Update what security context the data collector set should Run as, select the Open properties for this data collector set and hit Finish.
  11. In the properties dialogue box select the Schedule tab to determine when data collection should stop and start. Alternatively right click on the data collector set and hit Start for immediate collection.

Here are the definitions of the counters you are likely to be most interested in:

Private bytes indicate the amount of memory that the process executable has asked for but not necessarily the amount it is actually using. There is no way to tell whether a change in private bytes was due to the executable itself, or due to a linked library.

# Bytes in all Heaps displays the sum of the Gen 1 Heap Size, Gen 2 Heap Size, and Large Object Heap Size counters. This counter indicates the current memory allocated in bytes on the garbage collection heaps.

Gen 0 heap size displays the maximum bytes that can be allocated in generation 0; it does not indicate the current number of bytes allocated in generation 0.

Gen 1 heap size displays the current number of bytes in generation 1; this counter does not display the maximum size of generation 1. Objects are not directly allocated in this generation; they are promoted from previous generation 0 garbage collections. This counter is updated at the end of a garbage collection, not at each allocation.

Gen 2 heap size displays the current number of bytes in generation 2. Objects are not directly allocated in this generation; they are promoted from generation 1 during previous generation 1 garbage collections. This counter is updated at the end of a garbage collection, not at each allocation.

Large Object Heap size displays the current size, in bytes, of the Large Object Heap. Objects that are greater than approximately 85,000 bytes are treated as large objects by the garbage collector and are directly allocated in a special heap; they are not promoted through the generations. This counter is updated at the end of a garbage collection, not at each allocation.
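If you would rather sample the same counters from code, say in a quick diagnostic console app, the System.Diagnostics.PerformanceCounter class can read them directly. This is only a sketch, and the instance name is a placeholder for whatever perfmon shows for your process:

using System;
using System.Diagnostics;
using System.Threading;

class MemoryCounterSample
{
    static void Main()
    {
        // Placeholder: use the instance name perfmon shows for your process.
        const string instance = "MyService";

        var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
        var allHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);
        var gen2 = new PerformanceCounter(".NET CLR Memory", "Gen 2 heap size", instance);
        var loh = new PerformanceCounter(".NET CLR Memory", "Large Object Heap size", instance);

        while (true)
        {
            // Same counters as the collector set above, sampled every 15 seconds.
            Console.WriteLine("{0:T}  Private={1:N0}  AllHeaps={2:N0}  Gen2={3:N0}  LOH={4:N0}",
                DateTime.Now,
                privateBytes.NextValue(),
                allHeaps.NextValue(),
                gen2.NextValue(),
                loh.NextValue());

            Thread.Sleep(TimeSpan.FromSeconds(15));
        }
    }
}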



March 2, 2016 3:38  Comments [0]
Tagged in .NET | Debugging | Tools

Last week I needed to take a quick peek at some of the methods involved in the generation of JavaScript AJAX proxy scripts. I had a vague notion of what class and namespace was involved but not much else, and I needed to confirm how the URI was evaluated from the base address. Much to my surprise I found a way to browse the .NET Framework source code online (Reference Source); it also has search and navigation which was, apparently, generated by the Roslyn compiler (the new compiler is a gift that just keeps giving).

Reference Source .NET Framework

This is incredibly useful for me especially as in most cases I am just looking to peek at a method to establish assumptions.

Configure Visual Studio Debugging

The next logical step is to go beyond simple searches and start directly debugging the Framework; this can be accomplished in Visual Studio with the following steps:

  • Go to the Tools -> Options -> Debugging -> General menu
  • Disable just my code
  • Disable step over properties and operators
  • Disable require source files to exactly match the original version
  • Enable .NET framework source stepping
  • Enable source server support

Download .NET framework source code

Of course if you prefer to use the search capabilities of Visual Studio (or NotePad++) you could simply download the versions of the .NET Framework you are interested in:



February 1, 2016 4:46  Comments [0]
Tagged in .NET

I have been playing with this list of topics that I think should be of concern to most full stack .NET developers. I have bounced positions recently and so technical interviews have been at the forefront of my mind. While I can say assuredly that I know these topics on a practical level, specific and important interview questions can become difficult to recall, especially when the question is poorly articulated.

So here is a list of MSDN articles, blogs, wiki entries and books, and I thought this post would be a great placeholder for it. This is clearly a fluid stack and I will probably repeat this exercise in a couple of years.

Let me know what you think, I am happy to add, remove or modify anything.

Design Patterns

SOLID Design

.NET 4.5/5

Model-View-Controller (MVC)

ASP.NET MVC

ASP.NET Web API

General ASP.NET

Fluent Validation

JavaScript

CSS and .LESS

Kendo UI

Claims-Based Identity

Federated Authentication

Windows Identity Foundation (WIF)

Service Oriented Architecture

Test-Driven Development

Windows Communication Foundation (WCF)

Dependency Injection / Inversion of Control

Autofac

Structure Map

OWIN

OAuth

log4net



January 20, 2016 1:28  Comments [3]
Tagged in .NET | C# | Design Pattern

I have done several projects over the years that required me to produce auto-generated source code, mainly to avoid repetitive coding tasks; to accomplish this I have used tools similar to CodeSmith. A couple of weeks ago I was tasked with auto-generating code using the CodeDOM, and I was actually surprised to find out that this kind of source-building framework exists natively; you can create every imaginable object type inherent to the .NET stack. Here is a simple example:

CodeCompileUnit compileUnit = new CodeCompileUnit();
CodeNamespace samples = new CodeNamespace("Samples");
samples.Imports.Add(new CodeNamespaceImport("System"));
CodeTypeDeclaration myclass = new CodeTypeDeclaration("Class1");
samples.Types.Add(myclass);
compileUnit.Namespaces.Add(samples);

CSharpCodeProvider provider = new CSharpCodeProvider();
string sourceFile = nameof(myclass) + "." + provider.FileExtension;

using (StreamWriter sw = new StreamWriter(sourceFile, false))
{
    IndentedTextWriter tw = new IndentedTextWriter(sw, " ");
    provider.GenerateCodeFromCompileUnit(compileUnit, tw, new CodeGeneratorOptions());
    tw.Close();
}

Now the above example is almost verbatim from MSDN, and actually outputs something like this; notice the misplaced using statement inside the namespace:

namespace Samples {
using System;


public class Class1 {
}
}

To avoid this you need to initially create a blank namespace and add your required using statements there; later we add both namespaces to our CodeCompileUnit like this:

CodeCompileUnit compileUnit = new CodeCompileUnit();
CodeNamespace blankNamespaces = new CodeNamespace();
blankNamespaces.Imports.Add(new CodeNamespaceImport("System"));
CodeNamespace samples = new CodeNamespace("Samples");
CodeTypeDeclaration myclass = new CodeTypeDeclaration("Class1");
samples.Types.Add(myclass);

compileUnit.Namespaces.Add(blankNamespaces); // Add the blank using statement
compileUnit.Namespaces.Add(samples);

CSharpCodeProvider provider = new CSharpCodeProvider();
string sourceFile = nameof(myclass) + "." + provider.FileExtension;

using (StreamWriter sw = new StreamWriter(sourceFile, false))
{
    IndentedTextWriter tw = new IndentedTextWriter(sw, " ");
    provider.GenerateCodeFromCompileUnit(compileUnit, tw, new CodeGeneratorOptions());
    tw.Close();
}

This leads to a more syntactically appropriate source file:

using System;


namespace Samples {


public class Class1 {
}
}
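As a small extension of the same idea (a sketch that assumes the myclass variable from the example above and runs before GenerateCodeFromCompileUnit is called), the CodeDOM can also emit members; here we give Class1 a trivial method:

// Hypothetical addition: give the generated Class1 a simple public method.
CodeMemberMethod hello = new CodeMemberMethod();
hello.Name = "SayHello";
hello.Attributes = MemberAttributes.Public | MemberAttributes.Final;
hello.ReturnType = new CodeTypeReference(typeof(string));
hello.Statements.Add(
    new CodeMethodReturnStatement(new CodePrimitiveExpression("Hello from CodeDOM")));
myclass.Members.Add(hello);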



December 21, 2015 9:07  Comments [0]
Tagged in .NET | C#
I have been developing software for years, but this year is the first time that I have had the opportunity to work with an Oracle database (on Linux no less). Now, I am no DBA, so most of the architectural and philosophical differences are lost on me, but there were several small things that reminded me there are indeed nuances within Microsoft SQL Server and its associated T-SQL scripting language that I take for granted. So if I could go back six months, these are the differences I would highlight for myself to smooth the transition.

November 11, 2015 1:41  Comments [0]
Tagged in .NET | C# | Sql

TypeScript is a superset of JavaScript that has been in the community for a few years now; it is currently on GitHub, and before that the project could be reviewed and downloaded from CodePlex. The promise of TypeScript is that it offers JavaScript semantics alongside concepts like classes, modules, and interfaces. These features are available at development time, but are ultimately compiled into plain JavaScript.

DuoCode

DuoCode is itself a C# compiler powered by Roslyn and seems to be sidestepping TypeScript completely by allowing you to compile to JavaScript directly from C# 6.0! The obvious weakness of TypeScript, in hindsight, is that it forced you to learn a JavaScript-like language in order to get the features you were missing in C#. Why not just avoid the middleman completely!

It sounds too good to be true, but you can try this out right now in your browser here; the following image is a simple hello world example, but feel free to experiment. DuoCode supports method overloading, events and delegates, attributes, interfaces, extension methods, generics, and so much more.

CSharpToJavaScript

Debugging is also not a problem here as DuoCode creates source mappings, which allow you to view and debug the original C# source files directly inside the browser (with Developer Tools) or in Visual Studio.

This really does push along the idea that JavaScript is simply the assembly language of the Web; the only problem with that statement is that so many people continue to begin web development with JavaScript rather than treat it as the last compilation step after development has finished. I am a C# devotee, so I am hopeful DuoCode will push us all in the right direction.




October 28, 2015 3:05  Comments [2]
Tagged in .NET | C# | JavaScript
The Roslyn project was about more than just incremental improvements to C# language features; it was about rewriting the compiler to enable rich experiences during the process of software development and coding. Roslyn has a rich analyzer framework which allows us to present potential issues to developers as they code rather than during some post-programming analysis. Some of us may have grown used to that kind of constant feedback using tools like ReSharper; however, this type of analysis now comes right out of the box, with a rich API.

August 24, 2015 23:38  Comments [0]
Tagged in .NET | Visual Studio
Back in the day I was a huge fan of .NET Reflector; it was so well designed (and appropriately named) that for a long time it became the noun and verb for all my .NET decompilation needs. As has been well documented, the tool has now become one of many products offered by Redgate, and as of the time of writing costs $95 and $199 for the standard and pro versions respectively. Unfortunately for me, when I left my last company I lost access to my paid version of Reflector, and while I had originally started using ILSpy I was never quite happy with the UX. Now I think I have finally found an alternative in JustDecompile from Telerik.

July 7, 2015 19:00  Comments [0]
Tagged in .NET | C# | Tools | Visual Studio

I have spent a lot of time talking about hiring, interviewing and vetting developers without really talking about how a developer should evaluate a potential employer. I have been with the same employer for over nine years now and so my expectations are generally met in a slow incremental march to my imagined developer nirvana. In recent weeks I have gained a new manager and, to his credit, he is making meaningful inquiries about our environments and culture and looking for opportunities to make our work experience more rewarding.

So when you have the opportunity to ask an existing or potential employer about improving your work life I think the following is good place to start.

Hardware

This is obviously a moving target (is Moore’s law still a thing?) and really depends on your specific technology stack, but as a “nose to the grindstone” developer you should be requesting a machine that has maxed-out RAM and an SSD. As of today I would say 8 GB is your minimum, and you should consider more if you are running VMs on a regular basis.

If you are being offered anything else, or your IS department does not differentiate between developer PCs and others, then it may be that your company or manager does not understand what you do. It may also be that the hardware budget is grossly underfunded (a bad sign either way). A quick tip is to ask the developer who has been at the company the longest when the last hardware refresh happened for them (assuming the more tenured developers have the worst devices).

Software & Platforms

MSDN licenses should be a staple for your entire team! This is often assumed, but you should ask and ensure that is the case. Developers today should also be asking about access to Macs and mobile devices, as they are playing a greater part in our software development and testing.

Do you know what versions of .NET your current software development is bound to? How do your team and your customers deal with upgrades? Will that limit the kinds of projects you will work on in the future? Ask; your future viability may depend on it!

Culture

How many people on your team are involved in the craft of software outside of the 9-5? Are folks involved in open source projects or Stack Overflow? Are they involved in local developer groups? Being a part of an engaged team that is looking at ways to improve the products they work on as well as themselves is critical. This kind of collective mastery is self-reinforcing and can provide an enriching and supportive environment.

I have not had to deal with “on-call” rotations in a direct way for many years, but you should understand the existing support structure and how the work you produce is related to it (directly or in some tangential Tier 3 rotation). It is always disconcerting to receive undesired calls from work during well-earned time off, so ask questions about this and ensure you have figured out what the worst-case scenarios look like and whether that is right for you.

Training & Conferences?

This is an important issue that I have found can easily turn into a shell game. The first step is to understand the training budget, not in an amorphous sense but in dollars and cents. Find out how much of that budget is applicable to you and whether there is a rotation for going to conferences. Do senior developers get first bite? My advice is to ask early and ask often; provide ballpark figures for paid conferences and throw in local events that might be free as an alternative.

My manager has been discussing the idea of paid leave that represents an organized opportunity to learn in more informal ways (like Pluralsight). For those of us with the desire and discipline, it helps reduce costs and simultaneously provides the benefits of repeatability and accessibility.


I am not suggesting for a second that missing items from the above list should result in a mass revolt, or that somehow your work is less important or meaningful, but I do think you should spend time considering the ways your employer can show a firm commitment to you, your work, and your craft.

Have I missed anything? What would you add?



December 9, 2014 2:21  Comments [0]
Tagged in .NET | Development Process | Programming | Software

Connect is the cloud-first, mobile-first, developer-first virtual event taking place today (and tomorrow), and the Microsoft team has been making some pretty amazing announcements that genuinely transform the future opportunities for .NET developers.

Microsoft is open sourcing the .NET Framework libraries (MIT license). Projects like Mono, which have relied on contributors who had never looked at disassembled .NET code, can now freely bring the .NET Framework sources directly into the Mono project. The code is available here; just amazing!!!

Additionally, Microsoft has begun redesigning .NET as .NET Core, which produces simpler versions of the class libraries; the project is hosted on GitHub here. The .NET Framework team also spent a lot of time last year speeding up the JIT compiler and released RyuJIT; this JIT compiler will *also* be available under the same .NET Core release.

This bears repeating: the MIT License is as permissive as it gets, and this also comes with a patent promise! This is Microsoft really living the open source software ideal!

Other notable updates

What announcement are you most eager to check out?





November 12, 2014 16:58  Comments [0]
Tagged in .NET | ASP.NET | Visual Studio

I came across an interesting problem recently that reminded me why I both love and remain deeply suspicious of the .NET Framework. A web server was retrieving images from some service and almost overnight those images refused to render. For the sake of brevity (and security) I am leaving out much of the architecture, but essentially we were taking a valid Base64 encoded string, converting it into a byte array, then creating an image from that byte array (pretty standard stuff). I was able to replicate the portion of the code causing the issue, and it seemed to stem from an innocent looking static method on the Image class called FromStream. Here is the code:

class Program
{
    static void Main(string[] args)
    {
        byte[] backImageBytes = null;
        try
        {
            string val = GetBase64String();
            backImageBytes = Convert.FromBase64String(val);
            using (MemoryStream m = new MemoryStream(backImageBytes))
            using (Image backImage = Image.FromStream(m, false, false))
            {
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        Console.ReadLine();
    }
}

Here is the exception I was getting, and it was only occurring on the web server; I simply could not replicate it on my developer laptop or any other machine for that matter.

Exception: System.ArgumentException
Message: Invalid parameter used.
Source: System.Drawing
    at System.Drawing.Image.FromStream(Stream stream, Boolean useEmbeddedColorManagement, Boolean validateImageData)
    at System.Drawing.Image.FromStream(Stream stream, Boolean useEmbeddedColorManagement)
    at System.Drawing.Image.FromStream(Stream stream)
    at ConsoleApp1.Main(string [] args) in C:\dev\ConsoleApp1\Main.cs:line 132

My immediate assumption was that this was some problem with an older version of the framework on the web server, but I quickly realized that appropriate versions existed on all the servers and desktop machines I was testing. So at this point I wanted to see what the Image.FromStream() method was actually doing (image from Reflector).

Image.FromStream()

So I immediately see SafeNativeMethods.Gdip.GdipLoadImageFromStream() and roll my eyes; this is not going to end well (or here for that matter). So what is it? *It* is a DllImport, a platform invoke that allows managed code to call unmanaged functions from DLLs that *should* exist on your server.

[DllImport("gdiplus.dll", CharSet=CharSet.Unicode, SetLastError=true, ExactSpelling=true)]
internal static extern int GdipLoadImageFromStream(UnsafeNativeMethods.IStream stream, out IntPtr image);

So what I now appreciate is that the framework only provides a relatively thin layer of abstraction over a native Win32 library by the name of gdiplus.dll (GDI+) for image-related manipulation. The above error is actually occurring at a level that is somewhat more fundamental to Windows. The truth is that the server was probably a little older than it should be; even a subtle change in the image type (say GIF to TIFF) could result in this kind of failure, and without a GDI+ update this error would probably have continued to occur.

So… what do I love about the .NET Framework? The consistency you are presented with no matter what server or machine you are running on. What am I suspicious about? The vast portions of “core” code that rely exclusively on unmanaged code and have nothing to do with the version of the .NET Framework you have installed. You should take a look one layer down; it may actually surprise you how many P/Invoke calls you are currently making.
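If you are curious how much of that layer you are leaning on, a rough sketch like the following (it needs a project reference to System.Drawing, and any assembly can be swapped in) lists the P/Invoke entry points an assembly declares:

using System;
using System.Linq;
using System.Reflection;

class PInvokeAudit
{
    static void Main()
    {
        // Inspect the assembly discussed above; swap in any assembly you are suspicious of.
        Assembly drawing = typeof(System.Drawing.Image).Assembly;

        // P/Invoke declarations carry the PinvokeImpl flag in their method attributes.
        var pinvokes = drawing.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Static | BindingFlags.Public |
                                          BindingFlags.NonPublic | BindingFlags.DeclaredOnly))
            .Where(m => (m.Attributes & MethodAttributes.PinvokeImpl) != 0);

        foreach (MethodInfo method in pinvokes.OrderBy(m => m.DeclaringType.Name).ThenBy(m => m.Name))
        {
            Console.WriteLine(method.DeclaringType.Name + "::" + method.Name);
        }
    }
}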





September 15, 2014 12:53  Comments [0]
Tagged in .NET | Windows

I still remember being in Detroit in late 2002 with a conference room full of developers; Microsoft was pitching the wonders of a brand new framework called .NET. The pitch that really pulled me in was that the CLR guaranteed that my code would work anywhere the .NET Framework was installed (no more checking for and installing COM DLLs). Anyone who distributed any significant software back then knew how much this kind of promise would resonate with frustrated operations teams everywhere. Your chosen language compiler (C# or Visual Basic) would convert your code to the Common Intermediate Language (CIL), and the Common Language Runtime (CLR) would convert that into native code. Done! Your code works everywhere… well… everywhere Windows runs.

At the time I was a Visual Basic programmer, so the secondary appeal was that I could “upgrade” to VB.NET, work in concert with C# developers, and from there make a clean break into C#. As it happens the switch to C# was not nearly as problematic as I had imagined, and so I got the chance to jump ship. So while the idea of supporting multiple languages within a particular product was a much lauded benefit, most folks opted to move to C# anyway. Back then Visual Basic never quite sat right with me; it reminded me of the opening conversation between Morpheus and Neo in The Matrix.

What you know you can't explain, but you feel it. You've felt it your entire life, that there's something wrong with the world. You don't know what it is, but it's there, like a splinter in your mind, driving you mad. It is this feeling that has brought you to me. Do you know what I'm talking about?  - Morpheus

Yes I do Morpheus, you are talking about Visual Basic 6!

Fast forward ten-plus years and we have significantly more kinds of operating systems, devices and languages that drive and, in some cases, inhibit our software productivity. So today I wanted to talk about the approaches you might want to consider for being productive across an ever increasing range of platforms.

About three years ago I started to notice that my main programming skills (C#) were not stretching as effectively into a new breed of products targeted at the everyday consumer (as opposed to enterprise consumers). With the ascendancy of iOS and Android, the skills I honed over years in Visual Studio were only marginally helpful, and these devices seem to be springing up everywhere. So in contradiction to the CIL and CLR approach, the idea of becoming a true polyglot appears to be more important than ever.

The Polyglot

So if you are of the iOS persuasion then your approach would inevitably turn to the Xcode IDE and Objective-C (now Swift, of course) on a Mac. Having never owned a Mac (much to the chagrin of my better half) I have been left out of any serious iOS development thus far. In general the IDE has witnessed a slow, hard climb into the light, but the most recent updates have some decent improvements, including live rendering at design time (critical for my workflow).

xcode

The recommended approach for Android is the Eclipse IDE in combination with Java (it also has plugins for C/C++, PHP and others) on Windows. For me the Eclipse IDE has many more parallels to Visual Studio than does the Xcode IDE. Additionally Eclipse has been around since 2004 so many of the features I expect within an enterprise level IDE are readily available.

eclipse

Java is relatively easy for me to read, but what is even more familiar is the use of Android XML, it is appropriately different to XAML, but as long as you are comfortable with general XML schema you can quite confidently make changes inline if necessary.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >
    <TextView android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello World!" />
    <Button android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello World!" />
</LinearLayout>

Aside: I do find it fascinating that most larger companies (like mine) appear to prefer the notion of developer specialists which runs counter to the prevailing Sprint and DevOps influence. I wonder which approach will win out?

New Portability features in Visual Studio

The Visual Studio team is doing an increasingly brilliant job of embracing the diversity of the programming world, especially as it relates to open source and the web. What I am finding more remarkable is the embrace of other platforms; for example, the new .NET Portability Analyzer helps you determine how flexible your application is across a variety of platforms (including Xamarin). I fully expect Xbox will be added to this list in the very near future and thereby support a truly universal Windows app concept. The .NET API Port team is eager for your feedback!

dotnetportability

This is an interesting first step and I am hopeful that this is a positive harbinger of collaborations to come.

Ultimate Portability in Xamarin

I have a growing appreciation for folks who write brilliant software and further have the discipline to write text to explain concepts clearly and articulately. In my chosen profession, then, I hold Charles Petzold in the highest regard. I have read most of his books and follow his blog with some interest; while he blogs more about music these days it is still generally fascinating stuff (he actually encouraged folks to go see the Atlanta Orchestra while at a Microsoft conference, and it was quite brilliant!).

His seminal book, Programming Windows, laid the foundation for me and many Windows programmers of my generation. So of course I was then pleasantly surprised to see him working full time for the Xamarin team, writing a new book (“Creating Mobile Apps with Xamarin.Forms”). For me this upcoming book is like a bellwether for future mobile development practices for traditional Windows developers. I was initially skeptical that you could make compelling products across multiple platforms from a single IDE but I am seeing more and more evidence that this approach is both practical and indeed optimal.

XamarinDeviceSupport

So what is your approach? Are you conversant in multiple languages and multiple IDEs? Or are you relying on Xamarin and C# as *the* approach?



August 11, 2014 17:24  Comments [0]
Tagged in .NET | Android | C# | iOS | Programming | Windows

Earlier this year I spent significant time with a vendor troubleshooting requests we were sending across the network. In the end I was convinced I needed to rely almost exclusively on the vendor to verify that the certificates were correctly applied. Well, a colleague shared this link that describes how to configure network tracing for a WCF service.

Configure Network Tracing

The configuration ends up looking something like this: the System.Net trace sources feed a shared TextWriterTraceListener that writes everything to network.log.

<system.diagnostics>
  <sources>
    <source name="System.Net" tracemode="includehex" maxdatasize="1024">
      <listeners>
        <add name="System.Net"/>
      </listeners>
    </source>
    <source name="System.Net.Sockets">
      <listeners>
        <add name="System.Net"/>
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="System.Net" value="Verbose"/>
    <add name="System.Net.Sockets" value="Verbose"/>
  </switches>
  <sharedListeners>
    <add name="System.Net"
         type="System.Diagnostics.TextWriterTraceListener"
         initializeData="network.log"/>
  </sharedListeners>
  <trace autoflush="true"/>
</system.diagnostics>
When troubleshooting complex systems, never trust the well-intentioned opinion of even the savviest technical mind; get the facts. This would have saved me days!




July 14, 2014 22:03  Comments [0]
Tagged in .NET | ASP.NET | WCF

There was a metric ton of information coming out of Build this year, and I am still trying to parse all of it for my personal development. While the casual observer would have seen the Windows Phone and Azure announcements, it would be patently false to suggest that the ASP.NET platform has been left fallow this year. The following is a list of videos that directly or indirectly touch the world of .NET web developers!

It will be a few more months, at least, before I get through all these videos. Happy learning!



April 16, 2014 21:50  Comments [0]
Tagged in .NET | ASP.NET | C# | JavaScript | Training

Every year or so I manage to uncover a gap in my knowledge as it relates to strings, character sets, and encoding. I have just started embracing this as part of the cycle of mastering (or attempting to master) any given topic. Whenever I feel the need I resort to the most fundamental mechanisms of learning. It is a simple act of reading, practice and memorization.

Reading

These are the two articles I use to roll back my atrophied memory on strings:

Practice

The problem I encountered this time was born from the need to serialize a simple object into a valid UTF-8 XML string; the following shows the code I was originally using to get this accomplished:

public class SomeItem
{
    public string SomeDataElement;
    public string AnotherElement;
}

class Program
{
    static void Main(string[] args)
    {
        SomeItem si = new SomeItem();

        XmlSerializer xml = new XmlSerializer(si.GetType());
        StringWriter stringwriter = new StringWriter();
        xml.Serialize(stringwriter, si);

        Console.WriteLine(stringwriter.ToString());
        Console.Read();
    }
}

Pretty straightforward stuff; however, the resulting serialized string would always have the encoding defaulted to UTF-16 as follows:

<?xml version="1.0" encoding="utf-16"?>


The thing to internalize here is that .NET strings are Unicode, and more specifically UTF-16 encoded, so whenever streams are used to output any data you should explicitly define how you want the characters encoded. In this particular example the “Encoding” property associated with StringWriter is a read-only property, so I was forced to inherit from StringWriter and override the declared encoding as follows:

public class StringWriterUTF8 : StringWriter
{
    public override Encoding Encoding
    {
        get { return Encoding.UTF8; }
    }
}


class Program
{
    static void Main(string[] args)
    {
        SomeItem si = new SomeItem();

        XmlSerializer xml = new XmlSerializer(si.GetType());
        StringWriterUTF8 stringwriter = new StringWriterUTF8();
        xml.Serialize(stringwriter, si);

        Console.WriteLine(stringwriter.ToString());
        Console.Read();
    }
}
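
If you would rather not subclass StringWriter, another option (a sketch of my own, not from the original fix) is to serialize through an XmlWriter whose settings pin the encoding, then decode the bytes yourself; XmlWriter and XmlWriterSettings live in System.Xml:

    XmlWriterSettings settings = new XmlWriterSettings { Encoding = Encoding.UTF8 };

    using (MemoryStream stream = new MemoryStream())
    using (XmlWriter writer = XmlWriter.Create(stream, settings))
    {
        XmlSerializer xml = new XmlSerializer(typeof(SomeItem));
        xml.Serialize(writer, new SomeItem());
        writer.Flush();

        //The declaration now reads encoding="utf-8"
        Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
    }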

Memorization

Well, that is what this post is for…



January 9, 2014 3:11  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I have used Bing because it is convenient and because it provides rewards for XBox, however, whenever I would need information related to MSDN (that is *Microsoft* Software Developer Network) I would be forced into the arms of Google Search.

Today that changes: Microsoft announced that API search would be integrated directly into Bing search.

In the example below, you can see that we have surfaced relevant information directly on the results page so people don’t have to click through to MSDN to see if they have the right reference material. At a glance you can see:

1. Find the .Net versions supported.

2. View Code snippets explaining the syntax of this particular class.

3. Navigate to the appropriate version directly from the results page.

4. View syntax in the supported languages.


Finally!



November 8, 2013 0:41  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

This week I found myself having to explain to well-meaning folk the important differences between encryption and a hash function. Simply put, encryption comes with the equal and opposite notion of decryption, while a hash function is designed to be a one-way process (once it is hashed there is no way back). I am purposefully ignoring the highly advanced mathematics that goes into both encryption and hash functions (an engineering approach rather than an academic one).

So once the notion of hashing came up we talked about a variety of techniques and for some unknown reason MD5 was mentioned and started to be used synonymously with hashing in general. It was quickly brought to our attention that MD5 was compromised and badly (emphasis mine):

In order for a software integrity checksum or a digital signature based on a hash value to be of any value, the cryptographic hash function that is used must be collision resistant. That is, it must be practically impossible to find different messages that have the same hash value. Otherwise, a miscreant can use a single hash value to commit to more than a single file.

The cryptographic hash function MD5 was shown to be not collision resistant, by prof. Xiaoyun Wang and her co-authors, in 2004 (see the EuroCrypt 2005 paper "How to break MD5 and other hash functions").

I am not sure there is anything sadder than a compromised hash function, so MD5 is out! How about SHA-1? Well technically we have the same issue, and by technically, I mean at considerable cost in raw computing power:

As of 2012, the most efficient attack against SHA-1 is considered to be the one by Marc Stevens with an estimated cost of $2.77M to break a single hash value by renting CPU power from cloud servers. Stevens developed this attack in a project called HashClash, implementing a differential path attack.

So when selecting a hashing algorithm most security experts will anticipate and expect a minimum internal state size of greater than 160 bits; this ensures that the opportunity to find a collision remains unrealistic even with a massive amount of computing power. In order to meet that minimum requirement SHA-224/SHA-256 becomes the baseline. Here is a code sample:

using System;
using System.Security.Cryptography;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        //Even though I am using a secure hashing algorithm
        //I still believe you should be using a salt!
        byte[] salt = CreateRandomSalt();

        byte[] somebytes = Encoding.UTF8.GetBytes("THIS IS SOME TEXT TO HASH");
        string hash1 = CreateHash(somebytes, salt);

        byte[] morebytes = Encoding.UTF8.GetBytes("THIS IS SOME TEXT TO HASh");
        string hash2 = CreateHash(morebytes, salt);

        //Not likely to see a collision o_O
        if (hash1 == hash2)
            Console.WriteLine("We have a match");
        else
            Console.WriteLine("Does not match");

        Console.ReadLine();
    }

    private static string CreateHash(byte[] textbytes, byte[] saltbytes)
    {
        byte[] textandsalt = new byte[textbytes.Length + saltbytes.Length];

        //fill the initial slots with the text bytes
        for (int i = 0; i < textbytes.Length; i++)
            textandsalt[i] = textbytes[i];

        //append the salt bytes after the text bytes
        for (int i = 0; i < saltbytes.Length; i++)
            textandsalt[textbytes.Length + i] = saltbytes[i];

        SHA256Managed hash = new SHA256Managed();
        byte[] hashedbytes = hash.ComputeHash(textandsalt);

        return Convert.ToBase64String(hashedbytes);
    }

    private static byte[] CreateRandomSalt()
    {
        Random randgen = new Random();
        int saltsize = randgen.Next(8, 12);

        //Define a salt array
        byte[] saltbytes = new byte[saltsize];

        RNGCryptoServiceProvider rngprovider = new RNGCryptoServiceProvider();

        //Create random salt
        rngprovider.GetNonZeroBytes(saltbytes);

        return saltbytes;
    }
}


November 6, 2013 14:40  Comments [0]
Tagged in .NET | C# | Security
Share on Twitter, Facebook and Google+

I currently work for a small company that provides software services to the financial and banking industry. We provide a rather large SDK that allows for rapid development of custom solutions; however, it became evident that, in order to freely iterate over specific parts of our core product, unnecessary code dependencies within our product line would have to be removed.

This pervasive code interdependence forced us to perform vast rebuilds to replace or update even mundane portions of source code. Classes became difficult to test in isolation because direct references to dependencies existed everywhere, and thus the final implementation of the dependencies had to be available at compile time. Over the last three plus years these issues have been overcome by implementing a variety of Inversion of Control techniques.

The Inversion of Control (IoC) pattern can be implemented in several ways, personally I have pretty much settled on using and supporting the Dependency Injection (DI) pattern and/or using an IoC Container framework.

Dependency Injection

Imagine the following class; unfortunately it is designed such that PhotoEditor is directly tied to the implementation of the dependent UploadImage class. Now any changes required to UploadImage force a recompilation of the main class.

public class PhotoEditor
{
    private UploadImage _uploader;

    public PhotoEditor()
    {
        this._uploader = new UploadImage();
    }

    public void Upload()
    {
        this._uploader.Send();
    }
}


The following shows PhotoEditor again, but this time the upload class is referenced through an interface. This decoupling from the actual implementation means that we can make wholesale changes to UploadImage, and because the PhotoEditor class uses dependency injection, its dependencies can be replaced with mock implementations when testing.

public class PhotoEditor
{
    private IUploadImage _uploader;

    public PhotoEditor(IUploadImage uploader)
    {
        this._uploader = uploader;
    }

    public void Upload()
    {
        this._uploader.Send();
    }
}

The dependency injection pattern has a few liabilities; for example, there are more solution elements to manage. You have to ensure, before initializing objects, that the dependency injection framework can resolve the dependencies required. Finally, the code can become more difficult to follow.

IoC Container

The focus of an IoC/DI framework is an object called a container, which holds knowledge about the components your application needs and figures out which component to hand you. In practice the IoC container allows us to register components, and then we ask the container for an instance of a component; this means we no longer need to instantiate classes using the traditional constructor mechanism. Our pattern is as follows:

  • Create a ContainerBuilder.
  • Register components.
  • Build the container and store it for later use.
  • Create a lifetime scope from the container.
  • Use the lifetime scope to resolve instances of the components.

Here is an example of this premise using AutoFac:

using System;
using Autofac;

namespace DemoApp
{
    public class Program
    {
        private static IContainer Container { get; set; }

        static void Main(string[] args)
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<ConsoleOutput>().As<IOutput>();
            builder.RegisterType<TodayWriter>().As<IDateWriter>();
            Container = builder.Build();

            WriteDate();
        }

        public static void WriteDate()
        {
            // Create the scope, resolve your IDateWriter,
            // use it, then dispose of the scope.
            using (var scope = Container.BeginLifetimeScope())
            {
                var writer = scope.Resolve<IDateWriter>();
                writer.WriteDate();
            }
        }
    }
}
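
For the snippet to compile you also need the supporting types; something along the lines of the Autofac getting-started sample would do (my own sketch below):

    public interface IOutput
    {
        void Write(string content);
    }

    public class ConsoleOutput : IOutput
    {
        public void Write(string content)
        {
            Console.WriteLine(content);
        }
    }

    public interface IDateWriter
    {
        void WriteDate();
    }

    public class TodayWriter : IDateWriter
    {
        private readonly IOutput _output;

        //Autofac injects the registered IOutput implementation here
        public TodayWriter(IOutput output)
        {
            _output = output;
        }

        public void WriteDate()
        {
            _output.Write(DateTime.Today.ToShortDateString());
        }
    }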

Also check out the Castle Windsor IoC container for an alternate solution approach.



October 12, 2013 4:06  Comments [0]
Tagged in .NET | Programming
Share on Twitter, Facebook and Google+

Microsoft Virtual Academy (MVA) is free for anybody interested in advancing their career through training on Microsoft technologies. To get started, simply pick a course that interests you and begin watching the videos in each module. You’ll be able to download the presentations and also test your learning with quizzes. Once you have completed every action in a course, you will get a certificate of completion.

I am finding more and more developers who are just looking to ramp up skills in a particular technology, one of the more popular for ASP.NET Web Forms developers, for example, is the Developing ASP.NET MVC4 Applications Jump Start. Most videos are approximately 40 minutes long with a 10 minute assessment. You can build the necessary skills to help grow in your professional career with Microsoft technologies. There are over 200 courses across App development for HTML5, Windows and Windows Phone, Microsoft Office 365, SQL Server, Azure, and System Center. Create your own learning plan, track progress, and earn certificates!



September 28, 2013 15:07  Comments [0]
Tagged in .NET | Apps | C# | Cloud Services | JavaScript | jQuery | Training | Windows | Windows Phone | Windows Store
Share on Twitter, Facebook and Google+

In a previous post I was genuinely excited (it’s the developer geek in me) about the use of the asynchronous patterns in the Windows Runtime library; however, what is a little less obvious and even more fun is using this pattern within your own code. The example I focused on last post showed us waiting for Task<Result> (specifically string), but you can wait for almost anything and everything. In order to do this we take advantage of extension methods; you will probably recognize the pattern.

Waiting for a TimeSpan

public static TaskAwaiter GetAwaiter(this TimeSpan time)
{
    return Task.Delay(time).GetAwaiter();
}

This extension method allows you to treat TimeSpan as you would any asynchronous method as follows:

await TimeSpan.FromMinutes(1);

Waiting for a few seconds

public static TaskAwaiter GetAwaiter(this Int32 seconds)
{
    return TimeSpan.FromSeconds(seconds).GetAwaiter();
}

It seems even more reasonable to wait for a specific number of seconds; a simple value type like Int32 can now be awaitable:

await 600;

You will notice that when you hover over both TimeSpan and Int32, IntelliSense indicates that these types are awaitable.

awaitable


Waiting for your Downloads

So we have really only scratched the surface with the idea of waiting for a specific time frame or a few seconds. As you can imagine, you can actually wait on almost any single item or even multiple items. In the following example you can asynchronously wait for a series of downloads; this could just as easily be a list of processes, services, or events.

public static TaskAwaiter GetAwaiter(this IEnumerable<Task> tasks)
{
    return Task.WhenAll(tasks).GetAwaiter();
}

The following allows you to wait for all the downloads to complete before continuing, using WhenAll (which completes when all the supplied tasks have completed).

List<string> urls = new List<string>();
urls.Add("http://www.PoppaString.com");
urls.Add("http://www.PoppaString.com/apps");
urls.Add("http://www.PoppaString.com/design");

HttpClient client = new HttpClient();
await from url in urls select client.GetStringAsync(url);


August 16, 2013 22:47  Comments [0]
Tagged in .NET | C# | Programming
Share on Twitter, Facebook and Google+

The .NET Garbage Collector is a very effective, high-speed allocation mechanism with efficient use of memory and limited long-term fragmentation problems; however, it is possible to do things that will give you reduced performance, so knowing how garbage collection works can be really important. I do a lot of development at the business layer these days and so I tend not to have to worry about how .NET is cleaning up after my work outside of ASP.NET Application Pools; most of the heavy lifting (for me) is generally done at another layer.

Simplified GC Generation Model

Objects are processed based on the following rules:

  • Generation 0: This is the youngest generation and contains short-lived objects that have never survived a garbage collection. An example of a short-lived object is a temporary variable. Garbage collection occurs most frequently in this generation.
  • Generation 1: This generation contains short-lived objects and serves as a buffer between short-lived objects and long-lived objects. These objects have survived a single garbage collection; that is to say they were examined during a collection but not reclaimed.
  • Generation 2: This generation contains long-lived objects. An example of a long-lived object is an object in a server application that contains static data that is live for the duration of the process. These objects have survived more than one sweep of the garbage collection.
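
To make the promotion model concrete, here is a small sketch of my own that uses GC.GetGeneration to watch an object move through the generations as collections occur:

    using System;

    class Program
    {
        static void Main()
        {
            object data = new byte[1024];              //freshly allocated
            Console.WriteLine(GC.GetGeneration(data)); //typically 0

            GC.Collect();                              //survives one collection
            Console.WriteLine(GC.GetGeneration(data)); //typically 1

            GC.Collect();                              //survives another collection
            Console.WriteLine(GC.GetGeneration(data)); //typically 2
        }
    }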

Collection, prior to .NET 4, was based on a technique referred to as Concurrent Garbage Collection. Under this original model collection takes place for any generation 0 and generation 1 objects. During this time active threads within the current process are suspended; this ensures that active threads do not access the heap during clean up, and once the collection is completed the active threads resume. In theory that suspension time is very low; however, depending on your application, “low” could mean a lot of things (I can already hear our non-managed associates crying into their keyboards).

Background Garbage Collection (.NET 4.0)

.NET 4 has been around for a while now, so I was actually surprised to learn that garbage collection had seen some improvements. First of all, .NET 4.0 has improved the way in which it deals with thread suspensions when cleaning up the managed heap. In effect, when dealing with both generation 0/1 and generation 2 clean up simultaneously, the collections for generation 0/1 can occur on a dedicated background thread. Together with reducing the overall clean up time, there is a marked improvement in the predictability and length of the GC stop time, which is extremely important for real-time systems.

In theory everyone is supposed to benefit from these recent improvements, but I cannot help but think that there must be a couple of use cases where you will have to break open the profiling tools and unleash the performance counters. Let's all take a deep breath and take a look at those memory dumps.



July 23, 2013 17:10  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I have been seeing and using AutoMapper with greater frequency recently. Its premise is simple yet incredibly useful in the grand scheme of things. AutoMapper provides a very clean and intuitive way to map types. We have all had the tedious task of getting data from the database layer into a service layer, or from Entity Framework to … something else. The code associated with this type of mapping has either been stored in some ill-conceived utility class or, even worse, it sits at the implementation layer right before the output is returned. Either way, mapping code is uninspiring and how we have been doing it has been just wrong. This is how I now solve that problem:

Install AutoMapper from the Package Manager console as follows (it is a single assembly download):

PM> Install-Package AutoMapper

In most instances I have been using the static Mapper class, whose configuration should only occur once per AppDomain. For an ASP.NET application, for example, that would probably be completed in the Application_OnStart event of the Global.asax.cs file. For the sake of a rudimentary example let's consider the following basic types, one that is returned from a database API (EmployeeDB) and one which would be used to return data from a service endpoint (Employee).

    public class EmployeeDB
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime DOB { get; set; }
        public string SSN { get; set; }
    }

    public class Employee
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string SSN { get; set; }
    }

We need to shoehorn those EmployeeDB fields into our service layer type, and in the following example I show both explicit and implicit mapping within the Global.asax:

    <%@ Application Language="C#" %>
    <%@ Import Namespace="WebSite1" %>
    <%@ Import Namespace="System.Web.Optimization" %>
    <%@ Import Namespace="System.Web.Routing" %>
    <%@ Import Namespace="AutoMapper" %>

    void Application_Start(object sender, EventArgs e)
    {
        // Code that runs on application startup
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        AuthConfig.RegisterOpenAuth();
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        CreateMapping();
    }

    private void CreateMapping()
    {
        //Explicitly maps 'First name' and 'Last name' to 'Name'
        //Explicitly maps 'Age' from 'Date of birth'
        //Implicitly matches 'SSN' to 'SSN' no configuration required
        Mapper.CreateMap<EmployeeDB, Employee>()
            .ForMember(emp => emp.Name, empdb => empdb.MapFrom(n => n.FirstName + " " + n.LastName))
            .ForMember(emp => emp.Age, empdb => empdb.MapFrom(n => DateTime.Now.Year - n.DOB.Year));
    }

Immediately upon start up this web application Mapper knows how I intend to convert EmployeeDB to Employee:

    protected void Page_Load(object sender, EventArgs e)
    {
        //retrieve from a DB layer
        EmployeeDB employeedb = new EmployeeDB() { 
            DOB = DateTime.Now.AddYears(-42), 
            FirstName = "Dominic", 
            LastName = "Cobb", 
            SSN = "111-11-1111" };

        //employee now has all the converted properties from employee db
        //ready to be used at service layer
        var employee = Mapper.Map<Employee>(employeedb);
  
    }

Of course you can use this technique in any type of application you like.
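
One small habit worth adopting (a suggestion of my own, assuming the classic static Mapper API used above) is to have AutoMapper verify the configuration at startup so a missed destination member fails fast instead of silently mapping to a default value:

        //Add this at the end of CreateMapping(): it throws a descriptive
        //exception if any destination member has been left unmapped.
        Mapper.AssertConfigurationIsValid();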

Here is a list of some advanced AutoMapper techniques:

Happy mapping people!



June 26, 2013 3:39  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I have written about LINQ in the past and I truly believe it is the gateway to a better way of programming; surprisingly, I had never used the Distinct method before this week. So I was rather surprised that, for all my efforts, simply using the Distinct method would not work even for an object as simple as the following:

public class WordViewModel
{
    public string Word { get; set; }
}


Basically this is a function of the fact that, even for a type as simple as this, LINQ really does not know how to use the property ‘Word’ to determine whether an instance is unique or not. So in order to use the Distinct method effectively we have to tell LINQ how to compare our types using an IEqualityComparer<WordViewModel> as follows:

// Custom comparer for the WordViewModel class
class WordComparer : IEqualityComparer<WordViewModel>
{
    // Words are equal if their Word properties are equal.
    public bool Equals(WordViewModel x, WordViewModel y)
    {
        if (Object.ReferenceEquals(x, y)) return true;

        if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null))
            return false;

        //Check whether the properties are equal.
        return x.Word == y.Word;
    }

    // Equal objects must return the same hash code.
    public int GetHashCode(WordViewModel word)
    {
        if (Object.ReferenceEquals(word, null)) return 0;

        //Hash only the Word property, guarding against null
        return word.Word == null ? 0 : word.Word.GetHashCode();
    }
}

So then my simple Distinct method gets passed my customized comparer and works like a charm!

var newlist = new List<WordViewModel>(
    list.Where(s => !s.Word.StartsWith("Start"))
    .Where(s => s.Word.Length > 5)
    .OrderBy(c => c.Word.Length).ThenBy(s => s.Word))
    .Distinct(new WordComparer());
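
If the uniqueness only ever hinges on a single property, a comparer-free alternative (a quick sketch of my own) is to group by that property and take the first item from each group:

    var distinctwords = list
        .GroupBy(s => s.Word)
        .Select(g => g.First());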



July 30, 2012 16:00  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I was attempting to display an image from an IO stream for WP7, and for some reason it took me an age to get this in place. So here it is … may I never forget.

    System.IO.Stream str = Utilities.GetImageFromStream();
    if (str != null)
    {
        WriteableBitmap bitmap = Microsoft.Phone.PictureDecoder.DecodeJpeg(str);
        phoneimage.Source = bitmap;
    }


March 7, 2012 1:31  Comments [0]
Tagged in .NET | C# | Programming | Windows Phone
Share on Twitter, Facebook and Google+

I was trying to make a couple of small changes to a WCF service recently and I hit a brick wall while trying to update my references to that service in a WP7 app. I was noticing that instead of using the local IP address it was filling in the machine name and then picking a random port for the WSDL declaration.

The screen shot to the left was what I would see. The WCF service was available at http://127.0.0.1:81/Services/MyService.svc; however, the WSDL for the service was forced to http://[MACHINENAME]:82/Services/MyService.svc?wsdl. Now I added a hosts entry for my machine name at 127.0.0.1, but the WSDL is actually at port 82 (and would randomly select other ports). This is not a valid URL (on my machine) even with the service running and would fail all the time. I tried pointing directly to the correct and valid IP/port but all the references inside kept pointing to the wrong location.


The solution: define a behavior and attach it to your service to lock in the port number used for metadata. Something along these lines (the useRequestHeadersForMetadataAddress element with a defaultPorts entry, available from WCF 4) does the job:

<behaviors>
  <serviceBehaviors>
    <behavior>
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="81" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>


February 20, 2012 0:18  Comments [1]
Tagged in .NET | ASP.NET | WCF
Share on Twitter, Facebook and Google+

After doing a couple of presentations on the OWASP Top 10 list a couple of years back I *tried* to put together a series of viable demos that would help to illustrate the list in a more practical fashion. Unfortunately I got as far as the first two issues and never completed the exercise.

Recently I stumbled upon a really well-written blog by Troy Hunt, a Microsoft MVP, who has a clear passion for security. Troy has created a series of 10 blog posts that illustrate the OWASP Top 10 from the view of the .NET stack and ultimately makes these topics more accessible to developers like us. If you have a few moments please check out his posts (and his blog in general):

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards


December 14, 2011 23:26  Comments [0]
Tagged in .NET | Security
Share on Twitter, Facebook and Google+

Considering the industry I work in, even simple apps with little or no sensitive data are subject to every security penetration test available. If you are creating web apps that have access to a data layer you are inevitably confronted with the question of where and how to store your database connection strings (or other sensitive data). Thankfully most of us are miles away from INI files, registry keys and *GULP* hardcoding in the app itself.

The .NET Framework versions 1.x had limited support for configuration file encryption. However, .NET Framework 2.0 introduced a protected configuration feature that you can use to encrypt sensitive settings from the command line. The following two protected configuration providers are provided out of the box:

  • RSAProtectedConfigurationProvider - This is the default provider and uses the RSA public key encryption to encrypt and decrypt data.
  • DPAPIProtectedConfigurationProvider - This provider uses the Windows Data Protection API (DPAPI) to encrypt and decrypt data.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <someAppSettings>
        <add key="secret" value="lightoftheworld" />
    </someAppSettings>
</configuration>

Open the Visual Studio 2010 command prompt, navigate to the folder containing the web.config file, and run the following command:
aspnet_regiis -pef someAppSettings . -prov DataProtectionConfigurationProvider


This will modify only the portion of the config within the “someAppSettings” section:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <someAppSettings configProtectionProvider="DataProtectionConfigurationProvider">
        <EncryptedData>
            <CipherData>
                <CipherValue>ASDAASNCMnd.......</CipherValue>
            </CipherData>
        </EncryptedData>
    </someAppSettings>
</configuration>
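
To reverse the process from the command line, the matching decrypt switch (assuming the same section and folder) looks like this:

aspnet_regiis -pdf someAppSettings .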


You can also perform the encryption\decryption step via code, as shown here:

static void ToggleWebEncrypt()
{
    // Open the Web.config file and get the connectionStrings section.
    Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
    var section = config.GetSection("connectionStrings") as ConnectionStringsSection;

    // Toggle encryption.
    if (section.SectionInformation.IsProtected)
        section.SectionInformation.UnprotectSection();
    else
        section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");

    //If you have used the alternate provider this will change as follows:
    //section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");

    // Save changes to the Web.config file.
    config.Save();
}


ASP.NET automatically decrypts configuration sections when processing them; therefore, you do not need to write any additional decryption code. Be safe out there!



September 15, 2011 1:58  Comments [0]
Tagged in .NET | Security
Share on Twitter, Facebook and Google+

After watching the torturous train wreck of .NET Reflector come to its cruel end (i.e. a tool I have to actually pay for, ugh), I am really delighted by the introduction of version one of ILSpy. The options are kind of straightforward: you can view your decompiled code in either straight IL or C# (VB to be supported in a future release).

You do have a basic search option that can be filtered by Type or Member; however, there is no option yet for integrating plugins, but this is version 1 and I am sure that will be in some future road map. Watch this video and I know you will recognize and appreciate the effort poured into this project. Developers everywhere, please show your support: visit the home page and download the tools!



July 22, 2011 3:15  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I have been a huge fan of web services over the years, mainly due to the strong coupling with ASP.NET and the familiar HttpContext (request and response) whenever I needed access to information being sent to the server in the typical HTTP headers. During my recent work with Visual Studio 2010 (.NET 4.0) I immediately started using Windows Communication Foundation (WCF) projects as the primary means for developing a service oriented API for my Windows Phone 7 apps.

One of the first things I realized I missed was the obvious connection to the HttpContext information (IP address, port, etc.). There is, however, a means to get at similar information within the scope of a WCF operation as follows:

OperationContext op = OperationContext.Current;

MessageProperties mpa = op.IncomingMessageProperties;
var epp = mpa[RemoteEndpointMessageProperty.Name] as RemoteEndpointMessageProperty;

string address = epp.Address;
int port = epp.Port;

These properties were made available to us in .NET 3.5. I have also had a couple of chances to use the HttpRequestMessageProperty, which gives us direct access to things like the HTTP method (GET, POST, etc.), headers and query string.
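
A quick sketch of what that looks like (my own example, assuming the service is exposed over an HTTP binding):

    var properties = OperationContext.Current.IncomingMessageProperties;
    var http = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;

    string method = http.Method;               //GET, POST, etc.
    string agent = http.Headers["User-Agent"]; //any incoming HTTP header
    string query = http.QueryString;           //the raw query string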



March 20, 2011 19:38  Comments [0]
Tagged in .NET | ASP.NET | C# | WCF
Share on Twitter, Facebook and Google+

I recently did a primer on LINQ for my colleagues at work and I wanted to put my notes in a place where I could get quick access to them … I then remembered that I have a blog for that exact purpose. My notes were designed to cover the basics of LINQ and even more importantly I wanted to cover all the significant changes that made a language integrated query possible in the type safe .NET environment. During the presentation I opted to avoid Power point completely and stick to code samples we could all hack into, and here they are:

1 - Anatomy of a Basic Query
string[] ourteam = { "Larry", "Don", "Grishma", "Alex", "Navya", "Ganesh", 
                      "Melvin", "Mark", "Jeff" };

IEnumerable<string> query = from s in ourteam
                            where s.Length == 4
                            orderby s
                            select s;
2a - Extension Methods

Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Extension methods are a special kind of static method, but they are called as if they were instance methods on the extended type. See my previous post for details.

2b - Lambda Expressions

A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. “=>” is the lambda operator, which reads as "goes to".

Func<int, int> f = n => n * 5;
int val = f(5);
Console.WriteLine(val); //Displays 25
2c - Predicates

Represents the method that defines a set of criteria and determines whether the specified object meets those criteria.

s => s.Length == 4

2d - Lambda Expressions in practice
string[] ourteam = { "Larry", "Don", "Grishma", "Alex", "Navya", "Ganesh", 
                      "Melvin", "Mark", "Jeff" };

Func<string, bool> filter = s => s.Length == 4;
Func<string, string> extract = s => s;
Func<string, string> project = s => s.ToUpper();

IEnumerable<string> query = ourteam.Where(filter)
    .OrderBy(extract)
    .Select(project);
3  - What IN the var...

An implicitly typed local variable is strongly typed just as if you had declared the type yourself, but the compiler determines the type.


var a = "string"; //gives a string
var b = 45; // gives an int


4a - More Query Syntax
string[] ourteam = { "Larry", "Don", "Grishma", "Alex", "Navya", "Ganesh", 
                      "Melvin", "Mark", "Jeff" };

IEnumerable<string> query = ourteam.OrderBy(s => s).ThenBy(s => s.Length);
4b - Query Syntax GROUP BY
string[] ourteam = { "Larry", "Don", "Grishma", "Alex", "Navya", "Ganesh", 
                      "Melvin", "Mark", "Jeff" };

var myteamgroup = ourteam.GroupBy(s => s.Length);
foreach (IGrouping<int, string> grouplength in myteamgroup)
{
    Console.WriteLine(String.Format("String Length: {0}: ", grouplength.Key));
    foreach (string names in grouplength)
    {
        Console.WriteLine(String.Format("String Length: {0}: ", names));
    }
}
4c - Query Syntax AGGREGATE
string[] ourteam = { "Larry", "Don", "Grishma", "Alex", "Navya", "Ganesh", 
                      "Melvin", "Mark", "Jeff" };
int val = ourteam.Sum(s => s.Length);
Console.WriteLine(val); //total length of all strings
5  - JOIN
class ScoreCard { public int Key; public int Score; }
class Member { public int Key; public string Name; }

class Program
{
    static void Main(string[] args)
    {
        var teamscore = new List<ScoreCard>()
        {
            new ScoreCard {Key = 0, Score = 3 },
            new ScoreCard {Key = 1, Score = 12 },
            new ScoreCard {Key = 2, Score = 67 },
            new ScoreCard {Key = 3, Score = 67 },
            new ScoreCard {Key = 4, Score = 82 },
            new ScoreCard {Key = 5, Score = 100 },
        };

        var ourteam = new List<Member>()
        {
            new Member {Key = 0, Name = "Larry" },
            new Member {Key = 1, Name = "Don" },
            new Member {Key = 2, Name = "Grishma" },
            new Member {Key = 3, Name = "Alex" },
            new Member {Key = 4, Name = "Navya" },
            new Member {Key = 5, Name = "Melvin" },
            new Member {Key = 6, Name = "Mark" },
            new Member {Key = 7, Name = "Jeff" },
        };

        var myteamjoin = from ts in teamscore
                         join ot in ourteam on ts.Key equals ot.Key
                         select new { ot.Name, ts.Score };

        foreach (var mem in myteamjoin)
        {
            Console.WriteLine("Name: {0}, Score {1}", mem.Name, mem.Score);
        }
    }
}


December 16, 2010 3:59  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I stopped doing meaningful Windows development many moons ago; however, I was recently attempting to set up a simple test application that would send and receive data via a URI. I was reusing existing code and missed the chance to do it the right way, so I am taking the opportunity to do it the right way here. In this example (inspired by Scott Gu) I am using the WebClient class and the Twitter API. I am using DownloadString*, however, there is also a DownloadData* which returns a byte array for processing.

WebClient mt = new WebClient();
mt.DownloadStringCompleted += new DownloadStringCompletedEventHandler(mt_DownloadStringCompleted);
mt.DownloadStringAsync(new Uri("http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=" + _name));


I am primarily using .NET 1.1 at work and so my options there would be limited in terms of processing the response. As I am currently testing out Visual Studio 2010 Express, we can and will use LINQ to XML to process the XML response.

void mt_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Error != null)
        return;

    XElement tweets = XElement.Parse(e.Result);
    var sometweets = from twt in tweets.Descendants("status")
                     select new Tweet
                     {
                         ImageSource = twt.Element("user").Element("profile_image_url").Value,
                         Message = twt.Element("text").Value,
                         UserName = twt.Element("user").Element("screen_name").Value
                     };
}
public class Tweet
{
    public string UserName { get; set; }
    public string Message { get; set; }
    public string ImageSource { get; set; }
}

 

That is it really: if you need to pull string or binary data from a URI in the world of .NET, WebClient is your class. An opportunity missed at work, but I feel better for having got it off my chest here.

Related Links:



May 24, 2010 1:34  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I am a huge fan of WPF and more specifically Silverlight, so when my wife took the opportunity to incorporate Expression Blend into a recent project I was eager for her to give the new SketchFlow pattern a whirl. The SketchFlow concept came across as the ability to mock up a complete application or website and transform that into a working production model, effectively narrowing the gap between the client, the designer and the developer. After working with my wife over the last couple of months it is clear that this particular idea was misrepresented.

My wife had created a prototype using SketchFlow and put together a very compelling, and partially functional, application. Unfortunately the fact that the project was in fact a SketchFlow project means that you cannot simply open the application up in Visual Studio and start doing the complex software engineering work. In fact there are a bunch of hoops (albeit well documented in Blend's User Guide) that need to be jumped through in order to make this happen:

image 

It is just clear to me that the role of the Devigner (Designer and Developer amalgam) is not as cohesive as advertised.



December 14, 2009 3:09  Comments [0]
Tagged in .NET | Silverlight | Visual Studio | WPF
Share on Twitter, Facebook and Google+

This is the first in a series of posts that I wanted to dedicate to my own coding errors, the situations where I forget some fundamental programming concept and it has me confused for minutes, hours and\or days. So I will attempt to lay bare my own gaffes and hope that this blog serves as a permanent reminder that I remain irrevocably human.

So my latest brain freeze involved an attempt to filter an ArrayList based on its content (using .NET 1.1); the basic premise was as follows…

ArrayList ls = new ArrayList();
ls.Add("This");
ls.Add("is");
ls.Add("a");
ls.Add("Test");

foreach (string s in ls)
{
    if(s == "is")
        ls.Remove(s);
} 

The more discerning programmers among us will immediately realize that this for each loop will keep iterating right up until we actually remove something and then we will get this error…

System.InvalidOperationException was unhandled
  Message="Collection was modified; enumeration operation may not execute."

Simply put, we cannot modify a collection while iterating through it; pretty obvious when you actually take the time to read the exception ;) There are several ways to resolve this: we could create a new ArrayList that we simply add to (a sketch of that option follows the loop below); however, I preferred using a for loop, it feels closer to C.

for (int i = 0; i < ls.Count; i++)
{
    if((string)ls[i] == "is")
    {
        ls.RemoveAt(i);
        i--; //step back so we do not skip the element that just shifted into slot i
    }
}
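
For the record, the new-list option mentioned above looks like this (a quick sketch):

    ArrayList filtered = new ArrayList();
    foreach (string s in ls)
    {
        if (s != "is")
            filtered.Add(s);
    }
    ls = filtered;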

I will keep going with a list of embarrassing Faux Pas until I stop committing them, that is to say, until I stop programming ;)

Technorati Tags:



December 7, 2009 2:09  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

This topic has been done to death but I see the occasional errors in code that are directly related to a misunderstanding of “shallow” vs “deep” copies of reference types. Putting it here provides an outlet for my thoughts and a link I can point developers to so I do not have to repeat myself.

A shallow copy is best illustrated by the following diagram. Effectively both variables A and B are pointing at the same memory address, so any updates to A are reflected in B and vice versa:

Here is some code that performs a shallow copy:

MyClass c1 = new MyClass();
MyClass c2 = c1; // A shallow copy is performed here

c1.SC.SomeValue = "25";
c1.ST = "45";

//c2 will reflect the changes made to c1 above, trust me.
MessageBox.Show(String.Format("SomeValue: {0} : ST: {1}",c2.SC.SomeValue,c2.ST));

In C# a shallow copy can be explicitly performed with MemberwiseClone as follows (it is a protected method, so in practice you expose it through your own Clone-style method):

MyClass c2 = (MyClass)c1.MemberwiseClone(); //MemberwiseClone is protected, so this must run inside MyClass

In contrast a deep copy creates an entirely new memory space from which the members are referenced, as shown:

In order to provide deep copy capabilities you can implement the ICloneable interface and mark your class as Serializable (I am sure there are other ways to do this):

public object Clone()
{
    MemoryStream ms = new MemoryStream();
    BinaryFormatter bf = new BinaryFormatter();
    bf.Serialize(ms, this);
    ms.Position = 0;
    object o = bf.Deserialize(ms);
    ms.Close();
    return o;
}

So now the code for completing a deep copy would look something like this…

MyClass c1 = new MyClass();
MyClass c2 = (MyClass)c1.Clone();

c1.SC.SomeValue = "25";
c1.ST = "45";

//the message below will show empty strings: c2 was cloned before c1 was updated.
MessageBox.Show(String.Format("SomeValue: {0} : ST: {1}",c2.SC.SomeValue,c2.ST));

Please do not make the assumption that all ICloneable implementations are deep, Framework design guidelines were very vague and developers are notoriously inconsistent. You have been warned.

Technorati Tags: ,


August 14, 2009 4:03  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

In a quest to get to know the .NET library just a little bit better I wanted to push into the System.Math class just a little further. Having done quite a lot of math in my college days most of the methods in Math were pretty obvious and so I will forgo going over the obvious ones (Cos, Tan, etc.) here.

There were several that were not immediately recognizable to me for a variety of reasons; it has, after all, been 10 or more years since I have had any serious mathematical problems to deal with. Also, the names of the methods were not as intuitive as they might have been if not for the constraints of method naming.

System.Math.IEEERemainder(x, y) – Gives the remainder of the division as defined by IEEE.
System.Math.BigMul(a, b) – Gives the full product of two 32 bit numbers as Int64 (long).
System.Math.Pow(x,y) – Raises one number to the power of another. Not sure why they did not use the full “Power” word, I probably would have known it right away.
System.Math.E – This is the constant e, the base of the natural logarithm (roughly 2.718). I would have figured this out eventually, but a lone upper case “E” did not immediately read as Euler's number to me.
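
A tiny sketch of my own showing how a couple of these behave:

    using System;

    class MathDemo
    {
        static void Main()
        {
            //The % operator keeps the sign of the dividend;
            //IEEERemainder rounds the quotient to the nearest integer first.
            Console.WriteLine(3 % 2);                        //1
            Console.WriteLine(Math.IEEERemainder(3, 2));     //-1 (3 = 2*2 - 1)

            //BigMul returns a long, avoiding the Int32 overflow of a plain multiply
            Console.WriteLine(Math.BigMul(int.MaxValue, 2)); //4294967294

            Console.WriteLine(Math.Pow(2, 10));              //1024
            Console.WriteLine(Math.E);                       //2.718...
        }
    }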

What class or namespace should I look at next … I think System.Text … I reread the Joel on Software article on Unicode.

 

Technorati Tags: ,


June 23, 2009 1:00  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I have just completed a 10 day attack on Windows Presentation Foundation and after repeated false starts I have concluded I need to buy a book. In the age of Google search this may seem absurd to the average developer; however, after 8 days of trying to recreate a Windows Forms app (the other 2 days were actually useful) I am realizing I need a paradigm shift in my current mindset. Let us take the humble ListBox for example, the XAML would look something like this:

        <ListBox>
            <ListBoxItem>Item 1</ListBoxItem>
            <ListBoxItem>Item 2</ListBoxItem>
            <ListBoxItem>Item 3</ListBoxItem>
            <ListBoxItem>Item 4</ListBoxItem>
        </ListBox>


So I thought that was it, I *know* ListBoxes in WPF … WRONG! This is not even scratching the surface, my friends! Our capable WPF designers have broken the ListBox down into its simplest form. For example, let us try creating a list of Rectangles:

        <ListBox>
            <Rectangle Width="20" Height="20" Stroke="Yellow"  StrokeThickness="4" ></Rectangle>
            <Rectangle Width="20" Height="20" Stroke="Blue"  StrokeThickness="4" ></Rectangle>
            <Rectangle Width="20" Height="20" Stroke="Green"  StrokeThickness="4" ></Rectangle>
        </ListBox>

 

or ellipses…

image

        <ListBox>
            <Ellipse Width="20" Height="20" Stroke="Yellow"  StrokeThickness="4" ></Ellipse>
            <Ellipse Width="20" Height="20" Stroke="Blue"  StrokeThickness="4" ></Ellipse>
            <Ellipse Width="20" Height="20" Stroke="Green"  StrokeThickness="4" ></Ellipse>
        </ListBox>

 

…or any combination of the above. It is a type agnostic list of items you want to show. I can also orient the flow of list boxes as I see fit…

image

        <ListBox VerticalAlignment="Top">
            <ListBox.ItemsPanel>
                <ItemsPanelTemplate>
                    <VirtualizingStackPanel Orientation="Horizontal" IsItemsHost="True"/>
                </ItemsPanelTemplate>
            </ListBox.ItemsPanel>
            <Ellipse Width="20" Height="20" Stroke="Yellow"  StrokeThickness="4" ></Ellipse>
            <Ellipse Width="20" Height="20" Stroke="Blue"  StrokeThickness="4" ></Ellipse>
            <Ellipse Width="20" Height="20" Stroke="Green"  StrokeThickness="4" ></Ellipse>
        </ListBox>


Can you see why I need a book? At a casual glance I thought I really knew what was going on with a simple drag and drop, but on closer inspection it is clear that I am missing some underlying goodness through my own ignorance.

WPF rocks!!!

Technorati Tags: ,


March 27, 2009 17:09  Comments [1]
Tagged in .NET | WPF
Share on Twitter, Facebook and Google+

I was investigating an error recently with DateTime.ToString() that produced the following call stack:

System.FormatException: Input string was not in a correct format.

at System.DateTimeFormat.GetRealFormat(String format, DateTimeFormatInfo dtfi)

at System.DateTimeFormat.ExpandPredefinedFormat(String format, DateTime& dateTime, DateTimeFormatInfo& dtfi)

at System.DateTimeFormat.Format(DateTime dateTime, String format, DateTimeFormatInfo dtfi)

at System.DateTime.ToString(String format, IFormatProvider provider)

at MyWork.NewMethod(string a, string b)

I was doing the following…

DateTime.ToString("o")

…which is a valid format string for 2.0 and above but not 1.1. This would not have been an issue, but the test PC I ran the offending assembly on had both 1.1 and 2.0 installed. The ToString() method took the path of least resistance and used the framework dependencies under 2.0 and not 1.1. Of course the real problem arises when you try to rerun the assembly on a machine that only has .NET 1.1; at this point the "o" option is invalid.
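
For reference, here is what the round-trip specifier produces on 2.0 and above (the values shown in the comments are illustrative):

    DateTime now = DateTime.Now;
    Console.WriteLine(now.ToString("o"));                    //e.g. 2009-03-12T22:16:30.1234567-07:00
    Console.WriteLine(now.ToUniversalTime().ToString("o"));  //e.g. 2009-03-13T05:16:30.1234567Z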

 

Took me a few moments to track down, and reminded me never to trust MSDN (defaults to .NET 3.x) without verifying the backward compatibility of the options.

 

 



March 12, 2009 22:16  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

It has been a while since I have done significant UI work for a Windows application; however, on the last product I remember developing we broke almost every rule in the book with regards to storing local data on the PC. The main reason for this level of contravention was that Windows NT, and subsequently Windows 2000, never complained. Couple that with the fact that everyone basically ran as administrator and it led to programmer apathy and a lack of Windows compatibility.

With the advent of Vista and the persistence of UAC we have been forced to collectively understand what we should and should not do when it comes to reading and writing application specific data, and trust me this will not go away in Windows 7. Here are my top 3 bad data storage decisions for Windows:

  • Storing User and Application data in the Program Files – Now I am not talking about installation type data, but I am referring to user specific information that probably will not be shared.
  • Reading and Writing from System32 – There may be a few edge cases where a read is needed but a write?
  • Writing to Global registry settings – enough said…

The .NET Framework has a really convenient enumerated type, Environment.SpecialFolder, which is used with Environment.GetFolderPath; combined with Path.Combine it gives you a safe application data path:

Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "AppName")

  • MyDocuments – The "My Documents" folder.
  • MyComputer – The "My Computer" folder.
  • MyMusic – The "My Music" folder.
  • MyPictures – The "My Pictures" folder.
  • ApplicationData – The directory that serves as a common repository for application-specific data for the current roaming user.
  • CommonApplicationData – The directory that serves as a common repository for application-specific data that is used by all users.
  • LocalApplicationData – The directory that serves as a common repository for application-specific data that is used by the current, non-roaming user.
  • Cookies – The directory that serves as a common repository for Internet cookies.
  • Desktop – The logical Desktop rather than the physical file system location.
  • Favorites – The directory that serves as a common repository for the user's favorite items.
  • History – The directory that serves as a common repository for Internet history items.
  • InternetCache – The directory that serves as a common repository for temporary Internet files.
  • Programs – The directory that contains the user's program groups.
  • Recent – The directory that contains the user's most recently used documents.
  • SendTo – The directory that contains the Send To menu items.
  • StartMenu – The directory that contains the Start menu items.
  • Startup – The directory that corresponds to the user's Startup program group.
  • System – The System directory.
  • Templates – The directory that serves as a common repository for document templates.
  • DesktopDirectory – The directory used to physically store file objects on the desktop.
  • Personal – The directory that serves as a common repository for documents.
  • ProgramFiles – The program files directory.
  • CommonProgramFiles – The directory for components that are shared across applications.



Those are the members that will get you quick access to various locations on your PC; I find the top seven or so the most useful and UAC-appropriate. Here are a couple of links on Windows Compatibility and configuring User Account Control.
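
To make it concrete, here is a small sketch of my own (the application name is hypothetical) that builds a UAC-friendly per-user data folder and writes to it:

    using System;
    using System.IO;

    class AppDataDemo
    {
        static void Main()
        {
            //Per-user, roaming application data - a safe home for settings
            string folder = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "MyAppName"); //hypothetical application name

            Directory.CreateDirectory(folder); //no-op if it already exists
            File.WriteAllText(Path.Combine(folder, "settings.txt"), "hello");

            Console.WriteLine(folder);
        }
    }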

Technorati tags: ,


January 23, 2009 10:22  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

If you do any significant web development you will have probably seen this error message, which generally means you will not be able to open the project.

image

Seeing this error message generally means that the VSWebCache (a folder that is found in C:\Documents and Settings\user.name\VSWebCache) is out of sync, and the best thing to do is simply delete it. After going to this folder I simply could not find VSWebCache.

Up until today I thought this folder was found in Documents and Settings by default, but its location is actually controlled by the following Web Settings.

image

Technorati tags:


September 19, 2008 2:04  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I have written a couple of apps that use LINQ; however, I realized that in order to talk about LINQ I would really have to have a brief look at the collection interfaces and extension methods.

So extension methods are a really neat way to spot weld additional methods onto existing classes in the .NET Framework. For example, I recently talked about HttpUtility.UrlEncode being buried in the System.Web namespace. Let us say that I wanted the UrlEncode method to be available wherever I use the String type; then I could do the following:

    public static class MDUtilities
    {
        //Indicates that this method should be associated string
        public static string EncodeMyUrl(this string s) 
        {
            return System.Web.HttpUtility.UrlEncode(s);
        }
    }

So by including the above class definition you now have a supercharged version of the string type as shown below.

extension method
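
In case that screenshot does not age well, the call site is simply the extension method hanging off any string (output shown as a comment):

    string encoded = "a b&c".EncodeMyUrl();
    //encoded is now "a+b%26c"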

Extension methods are littered throughout the .NET 3.5 Framework, specifically on the collection interfaces. These extensions give us access to all the LINQ goodness that we are starting to enjoy. So you can imagine that somewhere in the .NET Framework there is a definition, much like the one above, that spot welds the new LINQ methods onto IEnumerable<T>.



June 24, 2008 11:31  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

Automatic updates has really helped XP, and subsequently 2003 and Vista, stay on top of the mounting attacks directed against PC users. Whenever I am called upon by relatives and friends to fix their PC's this is the first gift I always give them. My windows workstation is still using XP I have opted not to update to Vista, even though I have a copy sitting in my cupboard, because there remains some continuing compatibility issues with some of the Music software I use. Due to the critical nature of this workstation I am strict about not letting it connect to the Internet on a regular basis. I try to conduct the updates once a month to ensure that everything is as it should be.

I noticed that there have been several non critical updates that I have had queued for a while, and upon further investigation I realized that they have been failing. All the failures were related to the .NET framework and their associated service packs.

.NET Framework Error Message (SL4.TMP)

So I am thinking that there may be an issue with the Automatic Updates delivery system, and so I proceed to download the .NET Framework SPs (1.1 and 2.0) in the hope of circumventing the problem. I end up being given the useless error message to the left, which tells me absolutely nothing.

Seriously what were they expecting me to glean from this, what is SL4.tmp? how does it relate to my problem? Is the file corrupted or missing? Where is it supposed to be? All the bits of useful information I was hoping for did not seem to be given when the failure occurred.

At this point I am forced to perform a Google search on it but I come up with nothing! My next step is to try uninstalling the entire framework, both 2.0 and 1.1, but unfortunately neither the remove nor the repair option for either framework appears to be functional. This new error appears to be related to DOTNETFX.MSI, which does not appear to exist on my machine ... aaargh!!

I performed a series of searches on the topic of .NET installation errors and came across the blog of Aaron Stebner, who has developed a tool for cleaning up installations of the .NET framework (called the Installation Cleanup Utility). He clearly indicates on his blog that this should be considered a last resort, and that is certainly the position I found myself in, so I took the leap of faith and downloaded the tool.

Well it worked! The Cleanup Utility is easy to use and allows you to select which framework you are having problems with, or  select them all. It produces a log file indicating what changes were made (this may include registry updates so you will want to make a back up before proceeding). All in all it took about 20 seconds to fix the problem at which point all the Automatic updates started to run successfully! Another useful tool that is free!!

Installation Cleanup Utility

Before you decide to use this tool you should consider the following:

There are a couple of very important caveats that you should read before using this tool to cleanup .NET Framework bits on your machine:

  1. You should try to perform a standard uninstall first.  This tool is not designed as a replacement for uninstall, but rather as a last resort for cases where uninstall or repair did not succeed for unusual reasons.
  2. This cleanup tool will delete shared files and registry keys used by other versions of the .NET Framework.  So if you use it, be prepared to repair or reinstall any other versions of the .NET Framework that are on your computer to get them to work correctly afterwards



June 1, 2008 20:06  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

There is a great set of troubleshooting labs for .NET, seven in all, over at Tess' blog. They deal with different troubleshooting scenarios and provide step-by-step instructions for dealing with each of them. They do not take very long to set up and go through, and they really help identify some of the core debugging issues.

 

Tess is a Microsoft employee who has been blogging about .NET debugging for years and she really has a great talent for troubleshooting.

 

Technorati tags:


April 4, 2008 2:35  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I was working on some code that would provide a means to quickly tell if the application that you are starting is already running; actually, it was for the CommonRSS application that I am working on. I have done similar things in Visual Basic 6 but I do not remember ever trying this in C#. I am wondering if there is a much cleaner way of doing this ... any thoughts?

using System.Diagnostics;

private const string APPLICATION_NAME = "CommonRSS";

public static bool IsAppAlreadyRunning
{
    get
    {
        bool isAlreadyRunning = false;
        Process currentProcess = Process.GetCurrentProcess();
        Process[] processes = Process.GetProcesses();
        foreach (Process process in processes)
        {
            if (currentProcess.Id != process.Id)
            {
                if (APPLICATION_NAME == process.ProcessName)
                {
                    isAlreadyRunning = true;
                    break;
                }
            }
        }
        return isAlreadyRunning;
    }
}
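
For what it is worth, here are a couple of shorter options I would consider (sketches of my own, not battle-tested): Process.GetProcessesByName trims the loop down to one line, and a named Mutex avoids the race between checking and starting entirely.

    using System.Diagnostics;
    using System.Threading;

    //Option 1: let the framework do the filtering
    //(assumes the executable's process name matches APPLICATION_NAME)
    public static bool IsAppAlreadyRunningShort
    {
        get { return Process.GetProcessesByName(APPLICATION_NAME).Length > 1; }
    }

    //Option 2: a named mutex held for the lifetime of the first instance
    public static bool TryBecomeOnlyInstance(out Mutex mutex)
    {
        bool createdNew;
        mutex = new Mutex(true, APPLICATION_NAME + "_SingleInstance", out createdNew);
        return createdNew; //false means another instance already owns the mutex
    }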




March 20, 2008 2:31  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I am really trying to nail down my understanding of LINQ, but with the current workload, MCP study (yes, I am riding that train again) and trying to move into a new home, I am left with very little spare time. I would really love to get into the whole XNA development thing; however, it is tough to justify even more time in front of the PC when it will ultimately result in my wanting to purchase an Xbox 360.

This is a very cool demo of the process of creating PC\360 games using the XNA framework. Unfortunately this is as close as I will get.


Video: 3D XNA From Scratch: 10 How To Access The Guitar Controller

When I was much younger I had access to a ZX Spectrum (a home computer from the 80's, not sure if it made it stateside) and was able to hack into the code of a soccer management game (written in some derivative of BASIC); it was always fun to give myself limitless funds! I would love to be able to reverse engineer some of the games they have out there now!



February 28, 2008 6:06  Comments [0]
Tagged in .NET | XBox
Share on Twitter, Facebook and Google+

I have been struggling with why LINQ (Language Integrated Query) is even useful for the last few weeks, and then finally the penny dropped. While I could go into an elaborate explanation of its inner wonders, I would prefer to share with you the source of my epiphany. I stumbled across a great video from Charlie Calvert that includes an interview with Anders Hejlsberg (chief architect of C#).

At first I was looking into LINQ as a replacement for direct SQL query access, or just another way to write or access XML; however, LINQ is much more. It is a common way to link (pardon the pun) all of the above and more. I was able to really grok the concept as I listened to Anders speak. It sets out to join our various data sources and the programming world in a very intuitive way, and it has a very specific nomenclature to describe data relationships, much the same way SQL does (select, where, group by, etc).
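
As a tiny illustration of that shared nomenclature, here is a hypothetical in-memory query (the names are made up); the same select/where/orderby shape applies whether the source is objects, SQL or XML.

using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        int[] scores = { 97, 42, 81, 60, 75 };

        // The query vocabulary stays the same regardless of the underlying data source
        var passing = from s in scores
                      where s >= 60
                      orderby s descending
                      select s;

        foreach (int score in passing)
        {
            Console.WriteLine(score);
        }
    }
}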




February 24, 2008 0:11  Comments [2]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

Over the last year or so I have been working predominantly with ASP.NET, and for that reason alone I have tended not to deal directly with the GAC and some of its finer features (specifically side-by-side execution). So I thought I would go over one of the benefits of bindingRedirect.

Obviously the prerequisites to installing assemblies in the GAC include creating a strong name key file, associating it with the assembly and installing the assembly in the GAC, so I created two versions of an assembly.

image

In my test application I configured and tested against 1.1.0.0, so by default the test application would load 1.1.0.0. In order to switch to version 1.2 at will, I needed to create and/or modify the App.config file to use the bindingRedirect element as follows.

<configuration>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <!-- the publicKeyToken is a placeholder; use the token from your own strong name key (sn -T TestLib.dll) -->
                <assemblyIdentity name="TestLib" publicKeyToken="0123456789abcdef" culture="neutral" />
                <bindingRedirect oldVersion="1.1.0.0" newVersion="1.2.0.0" />
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>
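
To sanity check that the redirect is actually honored, a quick and purely illustrative runtime check can enumerate the loaded assemblies (DumpTestLibVersion is just a name I made up):

using System;
using System.Reflection;

static void DumpTestLibVersion()
{
    // Call this after the application has exercised TestLib so the assembly is loaded
    foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
    {
        if (a.GetName().Name == "TestLib")
        {
            Console.WriteLine(a.GetName().Version); // expect 1.2.0.0 once the bindingRedirect applies
        }
    }
}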

If you are dealing with multiple applications that need to be upgraded, or indeed downgraded, you should also consider using a publisher policy file or modifying the machine.config file.



February 16, 2008 0:01  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I was proposing a solution to a problem today and realized that I had an invalid assumption about the nature of the Array.BinarySearch method, so I ran a quick test to confirm and verify my ignorance. I defined the following test:

            string _mystring = "8,2,3,4,5,6,7,1"; //notice that 1 is not in numerical order
            string [] myarray = _mystring.Split(',');
            
            int result;

            for (int i=1; i < 9; i++)
            {
                result = Array.BinarySearch(myarray, i.ToString());
                Console.WriteLine(string.Format(
			"The value of {0} has been found at {1}.",i,result.ToString()));
            }

            Console.ReadLine();

This resulted in the following output; clearly it could not find "1" or "8" even though they are at the beginning and the end of the array ... herein my fundamental flaw was ultimately exposed and the anomaly revealed.

image

The simple problem here is that BinarySearch only works on arrays that are sorted! How did I forget that?!?!?

So this serves as an official note to self: when you are using BinarySearch on an unsorted array, perform an Array.Sort first:

            string _mystring = "8,2,3,4,5,6,7,1"; //notice that 1 is not in numerical order       
            string [] myarray = _mystring.Split(',');
            Array.Sort(myarray);

Doh!




February 14, 2008 3:00  Comments [2]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I love using enums! API developers should do all they can to include them in method signatures so the API consumer can take advantage of all the wonders of IntelliSense.

I have recently found a pitfall that API developers fall into repeatedly when it comes to the use and type safety of enums in method signatures. If we consider the following enum and a method that uses it, we can assume that the developer intended Direction to have only 4 valid values associated with it.

        public enum Direction
        {
            North,
            South,
            East,
            West
        }
        public static void WhichDirection(Direction number)
        {
            //Here we need to process values 0,1,2,3 as defined by 'Direction' 
            //anything else is not valid
        }

What I have noticed is that in this scenario API developers can assume that the WhichDirection method requires no additional type checking because they have defined an enum. However, the following example shows how the API consumer can abuse the method and its expected input.

  WhichDirection(Direction.North);    //expected input
  WhichDirection(Direction.South);    //expected input
  WhichDirection(Direction.East);     //expected input
  WhichDirection(Direction.West);     //expected input
  WhichDirection((Direction)59);      //a little skulduggery, that the compiler will not catch!

Clearly WhichDirection needs to be able to detect this scenario and react appropriately. If you have defined only 4 enumerated values and that is literally all that should be passed then the following is one way to safely check for that.

        public static void WhichDirection(Direction number)
        {
            if (Enum.IsDefined(typeof(Direction), number))
            {
                // the value maps to one of the four declared Direction members, process it here
            }
            else
            {
                // anything else, e.g. (Direction)59, is rejected
                throw new ArgumentOutOfRangeException("number");
            }
        }

The key here is not to assume that enums by themselves keep you safe from the outside world. Enums provide API consumers with a great pattern to follow, but API developers need to assume that there is malicious (or ignorant) intent when developers consume their API.

n.b. IsDefined may not always be the best way to solve this problem; I believe it uses reflection and is notoriously slow.
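
If that overhead ever matters, one alternative is a plain range check. This is only a sketch, and it assumes Direction keeps its default, contiguous values (North=0 through West=3); it would need revisiting if the enum changed.

public static void WhichDirection(Direction number)
{
    // Assumes the Direction values remain contiguous; no reflection involved
    if (number < Direction.North || number > Direction.West)
    {
        throw new ArgumentOutOfRangeException("number");
    }

    // process the valid direction here
}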



February 10, 2008 13:30  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

As I become more and more comfortable with Vista, the amount of unsigned software that I am using is starting to grate on my nerves. I am not sure if I am more annoyed with the developers or with the OS that keeps reminding me (even when I tell it not to remind me).

image

There is quite a comprehensive article on signing .NET applications; however, what the article does not go into is that getting a valid certificate from a 3rd party is not free. I can create a test certificate, but then the application will have a test certificate that is not verifiable. What, pray tell, is the point of that!

image

The perfect scenario for me is to use a Uri that I own as the point of verification; that way I can have and own a certificate that is publicly accessible! So as long as you believe that my site is mine, by extension you can trust that I own and control the certificate at that Uri!

Please submit all ideas for this scenario here!



January 24, 2008 21:48  Comments [0]
Tagged in .NET | Security | Windows
Share on Twitter, Facebook and Google+

My plan with CommonRSS was to have it run as a regular application (not a service, I just did not want to be bothered yet). So one of the first problems that occurred to me was how you go about starting an application when the OS starts. I also wanted to enable/disable that functionality through the registry, so this is the class I came up with to support that.

using Microsoft.Win32;
using System.Reflection;

static class UtilityFuncs
{
    private const string APP_NAME = "CommonRSS"; //replace this with the name of your application
    private const string REG_ENTRY = "Software\\Microsoft\\Windows\\CurrentVersion\\Run";

    /// <summary>
    /// Enables auto start by writing the current executable's path to the Run key.
    /// </summary>
    public static void EnableAutoStart()
    {
        RegistryKey key = Registry.CurrentUser.CreateSubKey(REG_ENTRY);
        key.SetValue(APP_NAME, Assembly.GetExecutingAssembly().Location);
        key.Close();
    }

    /// <summary>
    /// Disables auto start by removing the value from the Run key.
    /// </summary>
    public static void DisableAutoStart()
    {
        // The key must be opened as writable, otherwise DeleteValue will fail
        RegistryKey key = Registry.CurrentUser.OpenSubKey(REG_ENTRY, true);
        if (key != null)
        {
            string val = (string)key.GetValue(APP_NAME);
            if (val != null)
            {
                key.DeleteValue(APP_NAME);
            }
            key.Close();
        }
    }

    /// <summary>
    /// Checks if auto start is enabled for the current executable.
    /// </summary>
    public static bool IsAutoStartEnabled
    {
        get
        {
            RegistryKey key = Registry.CurrentUser.OpenSubKey(REG_ENTRY);
            if (key == null)
            {
                return false;
            }

            string val = (string)key.GetValue(APP_NAME);
            key.Close();
            if (val == null)
            {
                return false;
            }

            return (val == Assembly.GetExecutingAssembly().Location);
        }
    }
}
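
A minimal usage sketch, just to show the round trip (the output comments assume the calls are made from the same executable that was registered):

UtilityFuncs.EnableAutoStart();
Console.WriteLine(UtilityFuncs.IsAutoStartEnabled); // True
UtilityFuncs.DisableAutoStart();
Console.WriteLine(UtilityFuncs.IsAutoStartEnabled); // False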



January 14, 2008 15:51  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I am currently working on a small tool designed to access the common RSS feed list found within the Windows OS. It helps me because I have grown accustomed to using Internet Explorer for my RSS fix; however, there is currently no system-wide desktop alert that will let me know if there are any new feeds (that I know of).

 image

This simple application is designed to check if any new feeds are available, give the user an indication within the system tray and then allow the user to launch IE; the user can then review the new feeds from the Favorites Center!

 image

While there are a plethora of great RSS tools out there (better than this one), it has been a couple of years since I have done any sustained Forms-based Windows programming, so I wanted to throw my hat into the ring and shake the rust off. This will also give me the chance to have a closer look at Microsoft Visual C# Express 2008.


p.s. This tool is only useful in XP and below. There are sidebar gadgets that do the same thing for Vista!



January 11, 2008 8:39  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I was looking at the DataSet object the other day and realized that there are both Copy and Clone methods. They are designed to fulfil similar functions, which the following code highlights.

The Clone method is designed to copy only the structure, i.e. the schema, relations and constraints.

The Copy method is designed to replicate the structure and the associated data.

DataSet data1 = new DataSet();
DataSet data2 = new DataSet();

System.IO.FileStream fs = new System.IO.FileStream(@"C:\test\test.xml", System.IO.FileMode.Open);
data1.ReadXml(fs);
fs.Close();

// Clone copies the schema only, so data2 has no rows and this loop prints nothing
data2 = data1.Clone();
foreach (DataRow dr in data2.Tables[1].Rows)
{
    Console.WriteLine(dr[0].ToString());
}

// Copy replicates the schema and the data, so this loop prints the rows
data2 = data1.Copy();
foreach (DataRow dr in data2.Tables[1].Rows)
{
    Console.WriteLine(dr[0].ToString());
}



January 1, 2008 19:11  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

After fielding a few questions about security in some recent projects, I was looking at a couple of ways that security is handled within the .NET Framework. I wanted to figure out how you could define, method by method, whether a user had permission to run a method within their security context.

The two approaches I focused on are WindowsPrincipal and PrincipalPermission.

Windows Principal
At a basic level we could implement code that verifies whether the current user belongs to a given role. This method is clean and simple! Throw this at the front of each method and you're golden ... but that is not very elegant.

WindowsIdentity ident = WindowsIdentity.GetCurrent();
WindowsPrincipal user = new WindowsPrincipal(ident);
if (user.IsInRole("Admin")) //use a real Windows group name here, e.g. @"BUILTIN\Administrators"
{
    //Do stuff here...
}

The Principal Permission
In the following example we have applied the PrincipalPermissionAttribute, which declaratively requires the user running the code to belong to a specific role or to have already been authenticated. I learned the hard way that you also need to explicitly set the principal policy before calling the method or class that carries a permission attribute.

    using System;
    using System.Security;
    using System.Security.Permissions;
    using System.Security.Principal;
    using System.Threading;
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);

                MyTest mt = new MyTest();
                Console.WriteLine(mt.GetMessage());
            }
            catch(SecurityException ex)
            {
                Console.WriteLine(ex.Message);
            }
            Console.ReadLine();
        }
    }

    class MyTest
    {
        public MyTest() {Console.WriteLine("Start MyTest"); }

        [PrincipalPermissionAttribute(SecurityAction.Demand, Name = @"Domain\Admin")]
        public string GetMessage()
        {
            return "My Message";
        }
    }


December 23, 2007 1:51  Comments [0]
Tagged in .NET | Security
Share on Twitter, Facebook and Google+

The idea behind collections is that they are at once convenient and fraught with potential conversion problems. Collections are designed to store everything given to them as objects, and therefore filling a collection is cake.

For example:

ArrayList MyList = new ArrayList();
MyList.Add(8);
MyList.Add(9);
MyList.Add("10"); //wait, this is a string ... but it is perfectly legal as everything is stored as an object.

The fundamental idea is you can place multiple variable types in a single collection conveniently, without regard for all that is type safe or type sane. The problem is ultimately revealed when we unwrap our collection and expose it to the real type safe world of .NET programming.

 

We end up with:

int int8 = (int)MyList[0];
int int9 = (int)MyList[1];
int int10 = (int)MyList[2]; //but wait, this is a string!

…but the compiler can never know, because the collection stores everything as an object, and the problem will not become apparent until run time (usually in QA or, worse yet, in production).

This simple example now requires the programmer to continually track which types are associated with which elements of the ArrayList, which invariably leads to complex and custom type-checking solutions.

.NET 2.0 to the rescue:

Our friends at Microsoft have provided us with a generic List (System.Collections.Generic) that is type safe. Our code now becomes:

List<int> MyNewList = new List<int>(); //<int> tells the compiler that this is a generic collection of type int
MyNewList.Add(8);
MyNewList.Add(9);
MyNewList.Add("10"); //this is still a string, but now it is caught immediately by the compiler

The problematic line of code can now be caught safely by the compiler before ever going to QA, production, or even your own unit testing.



December 6, 2007 1:48  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

After Robert mentioned that we will be moving to a newer version of the .NET Framework, I wanted the opportunity to look at some of the language improvements in .NET 2.0, specifically the use of nullable types.

There are two main kinds of types that we generally deal with: reference types and value types. They are treated and stored differently, and as a result they have specific behaviors associated with them. The main difference I would like to highlight is the fact that value types cannot be null.

So for example:

int x = 0; //legal
int x = null; // not so legal "Cannot convert null to 'int' because it is a value type"
string s = "hello"; //legal
string s = null; //legal

The surface response is that integers are never null ... but development with XSD and SQL Server allows us to have nullable integers. So there are legitimate reasons to want a strong solution to the problem of null values.

The first way we could do this is to create a class that wraps around the integer type and supports a series of methods that check for null values. I saw a great VB 6 developer do a similar thing. However, this approach is made redundant in .NET 2.0.

The Nullable Struct:
The more appropriate way to do this is to use the new Nullable<T> struct. This gives us the following:

Nullable<int> i = null; //legal

We can also use HasValue to determine whether the current value is null as follows:

Nullable<int> i = null;
if(i.HasValue)  //(also i==null would work)
{
    int j = (int)i;
}

This can be rewritten as follows:

int? i = null; // int ? => nullable int
int j = i ?? 0;

Check the last two snippets of code out in ILDASM. It really amounts to the same thing.



December 4, 2007 1:45  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

Microsoft's online MSDN documentation has, for some time, offered a very intuitive way of discovering information about various classes and namespaces via direct input into the URL. Effectively, you can guess the location of the things you want to look up.

I can, for example, type the following URL in my browser, http://msdn2.microsoft.com/en-us/library/system.io, and get directed to the System.IO namespace documentation. Similarly, I can get directly to information on classes using the same technique: http://msdn2.microsoft.com/en-us/library/system.data.odbc.odbcerror takes me directly to the OdbcError class.



December 4, 2007 1:43  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I am always interested in writing one or two lines less, and I have seen a technique employed quite a lot when using switch statements in C#. Directly below is the way I have always written these statements. Notice that the bodies of the first two cases are identical; only the case labels differ.

XmlTextReader xr = new XmlTextReader(@"C:\test\test.xml");
while (!xr.EOF)
{
    switch (xr.NodeType)
    {
        case XmlNodeType.EndElement:
            Console.WriteLine(xr.Name.ToString());
            break;
        case XmlNodeType.Element:
            Console.WriteLine(xr.Name.ToString());
            break;
        case XmlNodeType.Text:
            Console.WriteLine(xr.Value.ToString());
            break;
    }
    xr.Read();
}

As a shortcut you can completely eliminate the first case's body and its associated "break;" statement as follows. Now both case labels fall through to the same line of code:

XmlTextReader xr = new XmlTextReader(@"C:\test\test.xml");
while (!xr.EOF)
{
    switch (xr.NodeType)
    {
        case XmlNodeType.EndElement:
        case XmlNodeType.Element:
            Console.WriteLine(xr.Name.ToString());
            break;
        case XmlNodeType.Text:
            Console.WriteLine(xr.Value.ToString());
            break;
    }
    xr.Read();
}

I did not like this shortcut for case statements at first, but I realized that it was due to a small portion of my mind that still thinks in VB6. There is no "break;" necessary in VB6; the Select Case statement in VB 6 executes only one of several statement blocks based on the value of the expression.



December 1, 2007 1:41  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I have been messing with a few of the more basic principles of reflection recently. The main tasks include pulling out a list of types from a given assembly and then reviewing the members of those specific types.

        // requires: using System; using System.Reflection;
        public static void ReviewTypes()
        {
            // Checking out a specific assembly; typeof(object).Assembly reliably hands back mscorlib
            Assembly a = typeof(object).Assembly;
            Type[] mytypes = a.GetTypes();
            foreach (Type ty in mytypes)
            {
                Console.WriteLine("Type is {0}", ty);
            }

            Console.WriteLine("{0} types have been located", mytypes.Length);
        }

        public static void ReviewMembers()
        {
            // examine a single type
            Type thisType = Type.GetType("System.Reflection.Assembly");
            Console.WriteLine("This Type is {0}", thisType);

            // get all the associated members
            MemberInfo[] mbrInfoAll = thisType.GetMembers();
            foreach (MemberInfo mbrInfoSelect in mbrInfoAll)
            {
                Console.WriteLine("{0} is a {1}", mbrInfoSelect, mbrInfoSelect.MemberType);
            }
            
        }

This type of granular interrogation of types and members at runtime can be very helpful!



November 29, 2007 1:37  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

Having developed in C and VB6 at the start of my career, I have a definite habit of creating utility modules that help with manipulating strings or processing arrays. I have noticed from time to time that this has made me neglect some of the built-in language features of my primary language, C#.

My first instinct when dealing with an array in C# is to loop through it to create and find strings. I am placing both of these string/array examples here to remind me to stop reinventing the wheel!

Example 1 - Creating a comma (or whatever) separated list of strings from an array.

string[] values = new string[5];
values[0] = "1";
values[1] = "2";
values[2] = "3";
values[3] = "4";
values[4] = "5";
string val = String.Join(",", values); //val = "1,2,3,4,5" - I believe Join had a VB6 equivalent

Example 2 - Search an array for a specific value

string stringvals = "1,2,3,4,5";
string [] values;
int searchresult;

values = stringvals.Split(','); // again a VB 6 equivalent was available
searchresult = Array.BinarySearch(values,"4"); //returns a negative number if it cannot find the value.



September 14, 2007 0:14  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

I was debugging some issues at work and we ended up discovering a compatibility issue with some of the DLLs. These issues were uncovered primarily by looking at the Fusion logs. The fuslogvw.exe tool provides quick and easy access to the logs, which in turn describe a variety of issues; however, I have found that binding errors are the most common and the most easily diagnosed.

Fusion logs describe the probing paths used to find compatible DLLs and the success or failure of the search.

LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/en-US/BlogExtension.Delicious.resources.DLL.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/en-US/BlogExtension.Delicious.resources/BlogExtension.Delicious.resources.DLL.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/plugins/en-US/BlogExtension.Delicious.resources.DLL.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/plugins/en-US/BlogExtension.Delicious.resources/BlogExtension.Delicious.resources.DLL.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/en-US/BlogExtension.Delicious.resources.EXE.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/en-US/BlogExtension.Delicious.resources/BlogExtension.Delicious.resources.EXE.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/plugins/en-US/BlogExtension.Delicious.resources.EXE.
LOG: Attempting download of new URL file:///C:/Program Files/RssBandit/plugins/en-US/BlogExtension.Delicious.resources/BlogExtension.Delicious.resources.EXE.
LOG: All probing URLs attempted and failed.

The registry value [HKLM\Software\Microsoft\Fusion\LogPath] will show you where the raw logs are stored. If you use Visual Studio .NET you will notice quite a few of your own applications have orphaned fusion logs, as well as whatever .NET applications you use.
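
As a trivial illustration, the same value can also be read programmatically; this is only a sketch, and the value will be missing if logging has never been configured.

using System;
using Microsoft.Win32;

// Print where Fusion has been told to write its logs, if the value exists
RegistryKey key = Registry.LocalMachine.OpenSubKey(@"Software\Microsoft\Fusion");
if (key != null)
{
    Console.WriteLine(key.GetValue("LogPath", "(not set)"));
    key.Close();
}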

If things are going drastically wrong with a complicated multi assembly installation, then take the time to look at the fusion log files.



August 30, 2007 0:12  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

I am doing a fair share of work that makes use of automated and semi-automated builds using NAnt scripts. I am loving the convenience; however, I usually come into the project after all that stuff is set up. So while I am encouraged and able to make significant changes to NAnt build files, I am never really in on the ground-floor construction.

Don passed on this really useful series of posts from Jean-Paul S. Boodhoo. He successfully explains the steps and considerations when setting up a fully automated build with NAnt.

I am pretty confident I could complete this quite quickly from scratch ... in theory!




July 11, 2007 23:43  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

Scott Gu is the man, and his latest post really nails the major Silverlight questions. As always, his demos are thorough and thought provoking. I have nothing to contribute here, just check out the videos!

"Blessed is the man who, having nothing to say, abstains from giving wordy evidence of the fact." - George Eliot



May 16, 2007 23:01  Comments [0]
Tagged in .NET | Silverlight
Share on Twitter, Facebook and Google+

Mix 07 has done nothing but ensure that I have a bunch of speculation and no sound grasp of the future. I am currently referring to the advent of the cross-platform Dynamic Language Runtime (DLR). The DLR will serve as the core engine for Silverlight and helps provide MS with an inroad into the world of Mac/Linux/whatever ... I wonder what Mac users think of that?

Jim Hugunin is a pioneer in the field of dynamic languages and was recruited by MS for that very reason. Here is his overview:

The DLR is about giving you the best experience for your language - true to the language, excellent tools, performance and seamless integration with a wealth of libraries and platforms. The essential benefits of the DLR are about sharing. It lets language implementers share standard features rather than rebuilding them from scratch. This lets them focus on the features that make a given language unique rather than on reinventing yet another GC system. It lets developers share code regardless of the language the code is implemented in and to use whatever language they prefer regardless of the language preferred by the environment they want to run in. Coupled with the Silverlight 1.1 platform announced today, it even lets languages share a sandboxed security model and browser integration.  This means that developers building browser-based applications can now use their preferred language even for client-side code.

I must admit I am very impressed, but I am forced to wonder if this is just a precursor to the CLR being used cross platform. If my wild speculation is indeed the case then the Mono project is as good as dead!

"When I came back to Dublin I was courtmartialed in my absence and sentenced to death in my absence, so I said they could shoot me in my absence." - Brendan Behan



May 3, 2007 22:50  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

Silverlight is a cross platform rich media experience from Microsoft (never thought I would be using the words cross platform and Microsoft in the same sentence!). What is intriguing about Silverlight is that the programming model is based upon and immersed within the .NET framework. Together with a choice of programming languages and the presentation model using XAML, Silverlight is set to change the game significantly.

Silverlight serves as a direct competitor to technologies like Flash, and while I believe that Silverlight is a great alternative to Flash, I am wondering if this is too little too late. Flash is basically on every machine that I have ever sat down at; its market penetration is second to none. I am wondering if even MS can compete with that kind of distribution. Unless of course they bundle it as a priority update to your machine (approx 2 MB), in which case they would level the playing field quite quickly ... don't you just love monopolies!

The Silverlight evangelists have put together a series of screencasts that capture some of the great features of this technology. Enjoy!

"Knowledge is power, if you know it about the right person." - Ethel Mumford



May 1, 2007 22:43  Comments [0]
Tagged in .NET | Silverlight
Share on Twitter, Facebook and Google+

When C# first came out I spent an exorbitant amount of time checking the language reference and ensuring that my C++ and VB6 know-how was appropriately ported to this new language. So I must admit to being a little annoyed that I only found out about the is operator today.

The is operator is used to check whether the run-time type of an object is compatible with a given type. The following example highlights its practical use.

object mc = new MyClass(); // at run time mc could be holding any type
if (mc is MyClass){
    // ... do stuff
}

How useful is that? Why did I think reflection was the best way to solve this problem? When I was in Portland a couple of years ago I distinctly remember someone asking how to solve a problem of this ilk. I proudly explained the beauty of reflection and sent a long-winded sample ... I was so wrong ;(

"Look back, and smile on perils past." - Walter Scott



May 1, 2007 22:42  Comments [0]
Tagged in .NET | C#
Share on Twitter, Facebook and Google+

Web-based commerce is increasing exponentially, and companies, retailers, banks, etc. are responding to the diverse nature of their worldwide clientele. In the US the main concerns are for English (en) and Spanish (es) speaking customers.

I have been doing a lot of work recently with Satellite assemblies. By definition, satellite assemblies only contain resource files (.resx). They do not contain any application code. These assemblies are generally reserved for strings that can be used to support a particular culture or language. In the satellite assembly deployment model, you create an application with one default assembly (which is the main assembly) and several satellite assemblies (en, es, etc).

Visual Studio (VS) has an implied strategy for creating satellite assemblies: it looks for a specific naming scheme for .resx files.

Strings.en.resx - Implies the need for a satellite assembly named strings.en.dll
Strings.es.resx - Implies the need for a satellite assembly named strings.es.dll
Strings.resx - Will not create a satellite assembly, this will be embedded in the default assembly based on the build action.

Now in order for the default assembly to have knowledge of an associated satellite assembly there also needs to be an explicit reference applied during compilation as follows (VS does this for us).

csc.exe /debug+ /reference:..\Bin\en\strings.en.dll;..\Bin\es\strings.es.dll /out:..\Bin\MyApp.exe   (syntax may be a little off)

Updating information in a satellite assembly is a trivial task, and is certainly the way to go if you are in a hurry and need to avoid a full rebuild. So take the case where you have an existing MainApp.exe and a Strings.en.dll associated with the main app. To update the satellite assembly, do the following:

  1. Update the resx file as required (Please note you cannot add information here without recompiling the default assembly).
  2. run the following command (VS.NET cmd): resgen MyWeb/Resources/Strings.en.resx Strings.en.resources
  3. run the following command (VS.NET cmd): al /out:Strings.en.dll /c:en /embed:Strings.en.resources
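
For completeness, here is a rough sketch of consuming the localized strings at run time. It assumes the standard hub-and-spoke layout that Visual Studio produces (satellites named <MainAssembly>.resources.dll under culture subfolders such as es\) and a hypothetical base name of "MyApp.Strings" with a "Greeting" key.

using System;
using System.Globalization;
using System.Reflection;
using System.Resources;
using System.Threading;

class ResourceSketch
{
    static void Main()
    {
        // Pretend the user's UI culture is Spanish; normally this comes from the OS
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("es");

        // Base name = default namespace + .resx file name (without culture suffix or extension)
        ResourceManager rm = new ResourceManager("MyApp.Strings", Assembly.GetExecutingAssembly());

        // Looks in the es satellite first, then falls back to the neutral resources in the main assembly
        Console.WriteLine(rm.GetString("Greeting"));
    }
}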

"I have learned that success is to be measured not so much by the position that one has reached in life as by the obstacles which he has had to overcome while trying to succeed." - Booker T. Washington



April 10, 2007 22:16  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

This topic has been discussed actively at our company in relation to the potential effects on our customers. After receiving this email from Microsoft I thought I would post it for the good of the developer community:

Dear Valued Microsoft Customer,

In 2005, the United States government passed the Energy Policy Act of 2005. This act changes the start and end dates for Daylight Saving Time (DST) as of spring 2007. These changes may impact the way applications run. Microsoft is releasing an update for Windows through Microsoft Update that reflects these changes.

Developers who use the .NET Framework may find their applications affected if the application uses the time zone information for historical purposes or if they have derived custom classes from System.TimeZone to provide custom time zone information. The standard System.TimeZone class provides a managed wrapper for the underlying Windows Operating System time zone functions.

In addition, developers who use Visual C++ may find their applications affected if they use the CRT time functions, or the TZ environment variable. Microsoft is currently working on a fix for this issue and will post information about its availability on the Visual Studio Support page.

Most applications that use these affected classes will not need to be modified as this update will ensure that the correct data is provided seamlessly to the application. However, applications that use these classes or the underlying Windows API to perform historical time look-ups will need to be modified.

In most cases, developers who have extended the .NET Framework's time zone support by creating custom time zone classes derived from System.TimeZone, or by direct access to the Win32 API, will not have to update their applications as long as the available updates to the operating system are applied. However, solutions that rely on private time zone data, or that retrieve system time zone information by accessing the registry directly, may need to be updated. Applications that deal with historical time zone data may also need to be updated.

Microsoft advises all developers who make use of time zone data to test their applications against this update to ensure that their applications function correctly.

For more detailed information and the latest updates please visit http://msdn2.microsoft.com/en-us/vstudio/bb264729.aspx, Preparing for daylight saving time changes in 2007, and KB928388: 2007 time zone update for Microsoft Windows operating systems.

Further Assistance

Microsoft values your business. For more information visit http://www.microsoft.com/dst2007, or contact Microsoft for assistance. A list of phone numbers is located at http://support.microsoft.com. Microsoft Premier Customers may engage their Technical Account Manager directly.



February 23, 2007 22:35  Comments [0]
Tagged in .NET | Windows
Share on Twitter, Facebook and Google+

This is just up on Scott Gu's blog ... AJAX is getting full support in ASP.NET/Visual Studio 2005. While there have been various plugins available for some time now, this represents a great commitment to the ground swell of AJAX wannabes (me included). Microsoft's implementation of AJAX, aka Atlas, has represented the Holy Grail of web development: it seeks to fill the gap between postbacks and the Windows programming model that we all would prefer.



September 13, 2006 4:34  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+

There are many inconsistent explanations of the use and nature of ViewState in ASP.NET. Don found this rather detailed explanation of its existence ... the whys and the wherefores. ViewState seems to be a staple question in ASP.NET interviews, and understanding its nature is fundamental to the discipline!

"The wise man does at once what the fool does finally." - Machiavelli



September 8, 2006 4:32  Comments [0]
Tagged in .NET
Share on Twitter, Facebook and Google+