I have a lot of Web Forms knowledge, I mean a lot … and while no one will say Web Forms is dead, I think it is fair to say that MVC has proven to be a consistently better pattern for general web development. I have been hinting recently that I am working on the DasBlog software (which uses Web Forms), as I would like to ensure it remains viable and usable for the foreseeable future. However, until very recently, updating DasBlog would have meant essentially starting over, and frankly that was out of the question.

In fact, a few months back I tried integrating with .NET Standard 1.6 (ASP.NET Core), but almost everything failed immediately and catastrophically; I did not even get a chance to hit a line of code. However, with the release of .NET Standard 2.0 I was able to get things up and running right away. To be clear, I could have just settled for upgrading the .NET Framework version to vNext, but I was more interested in moving in a more platform-agnostic direction.

I need to do more thorough testing of course, but initially it looks like .NET Standard 2.0 covers code developed for DasBlog going back as far as 2003! That is pretty amazing API coverage!

Regardless of the API coverage, there are some very practical and historic problems I have to address. The first is that almost every URL on my site ends with .aspx, unnecessarily exposing the platform I am using, and while one could argue that the SEO is great, the URLs are objectively ugly.

AddIISUrlRewrite

In this post I want to look at one way to solve the problem of redirecting relatively static pages to more modern MVC ones. ASP.NET Core's URL rewriting middleware includes a fascinating extension method called AddIISUrlRewrite. It allows me to redirect (30x) my old .aspx pages to my new ASP.NET Core MVC pages before the request ever reaches the MVC pipeline. Here are three examples of the URLs I want to change and what I want to redirect them to:

default.aspx?page=3 → /page/3

monthview.aspx?month=2017-06 → /archive/2017/06

SyndicationService.asmx/getrss → /feed/rss

To accomplish this I updated my ASP.NET Core project to include the Microsoft.AspNetCore.Rewrite NuGet package, then in Startup.cs added the following namespace:

using Microsoft.AspNetCore.Rewrite;

Then in the Configure method (used to make changes to the HTTP request pipeline) I add the following:

var options = new RewriteOptions()
                 .AddIISUrlRewrite(env.ContentRootFileProvider, @"Config\DasBlogIISUrlRewrite.xml");

app.UseRewriter(options);
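
For context, here is a rough sketch of how that call sits in a typical Configure method. The UseStaticFiles and UseMvc calls below are assumptions about the rest of the pipeline rather than the actual DasBlog code:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    var options = new RewriteOptions()
                     .AddIISUrlRewrite(env.ContentRootFileProvider, @"Config\DasBlogIISUrlRewrite.xml");

    // Register the rewriter early so legacy .aspx requests are redirected
    // before static files or MVC routing ever see them.
    app.UseRewriter(options);

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The key point is ordering: UseRewriter needs to run before the middleware that would otherwise handle (or 404) the old URLs.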

The XML file (DasBlogIISUrlRewrite.xml) is a snippet of the rewrite settings normally found in web.config. Here is how I solved the three redirects I needed:

<rewrite>
  <rules>    
    <rule name="Redirect front page" stopProcessing="false">
      <match url="^default.aspx" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="&amp;?page=(.*)" />
      </conditions>
      <action type="Redirect" url="/page/{C:1}" redirectType="Permanent" />
    </rule>
    
    <rule name="Redirect RSS syndication" stopProcessing="true">
      <match url="^SyndicationService.asmx/GetRss" />
      <action type="Redirect" url="/rss" redirectType="Permanent" />
    </rule>
    
    <rule name="Redirect Month page (year-month)" stopProcessing="false">
      <match url="^monthview.aspx" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="&amp;?month=(.*)-(.*)" />
      </conditions>
      <action type="Redirect" url="/archive/{C:1}/{C:2}" redirectType="Permanent" />
    </rule> 
  </rules>
</rewrite>
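
One nice side effect of keeping these rules in their own XML file is that they are easy to exercise in isolation. Here is a minimal sketch of how the front page rule could be verified in memory; it assumes the Microsoft.AspNetCore.TestHost package and xUnit, neither of which is part of DasBlog itself:

using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Rewrite;
using Microsoft.AspNetCore.TestHost;
using Xunit;

public class RedirectRuleTests
{
    [Fact]
    public async Task DefaultAspx_RedirectsToPageRoute()
    {
        // Load the same rule file the application uses (path is relative to the test's working directory).
        var rulesXml = File.ReadAllText(@"Config\DasBlogIISUrlRewrite.xml");

        var builder = new WebHostBuilder().Configure(app =>
        {
            var options = new RewriteOptions().AddIISUrlRewrite(new StringReader(rulesXml));
            app.UseRewriter(options);
        });

        using (var server = new TestServer(builder))
        {
            // The in-memory test client does not follow redirects, so the 301 can be inspected directly.
            var response = await server.CreateClient().GetAsync("/default.aspx?page=3");

            Assert.Equal(HttpStatusCode.MovedPermanently, response.StatusCode);

            // Depending on how the query string is appended on redirect, the location may
            // carry the original query string along, so only assert the prefix here.
            Assert.StartsWith("/page/3", response.Headers.Location.OriginalString);
        }
    }
}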


July 19, 2017 2:27
Tagged in ASP.NET Core

After finally understanding what the .NET Standard is and why we need it, the next thing I wanted to do was investigate which of the many binaries, projects and packages I have used will continue to work with the .NET Standard. Thankfully there are a couple of tools that make this relatively straightforward.

Visual Studio Plugin

The obvious place for this analysis is Visual Studio, more specifically the .NET Portability Analyzer plugin. You can install it from the gallery like this:

  • Tools -> Extensions and Updates...
  • Online > Visual Studio Gallery
  • Type in .NET Portability Analyzer and download/install it.
  • Visual Studio will probably require a restart.

Once installed, you have access to two new context menu commands, which you activate by right-clicking your project:

  • Analyze Project Portability
  • Portability Analyzer Settings

Executing the Analyze Project Portability command against your Visual Studio project produces an Excel sheet that gives you a compatibility score ranging from 0 to 100%.

Analyze Project Portability Excel Results

In my example I am not 100% compatible, which means some portion of this project does not conform to the .NET Standard; clicking on the Details tab gives more specific reasons for what is failing.

Analyze Project Portability Details

Command Line Tool

If you are not inclined to use Visual Studio, or you are more interested in collecting these results during your continuous or official builds, you could also take advantage of the command line tool, ApiPort (it is driven by the same engine as the VS plugin).

Once you have ApiPort downloaded and set up, you can execute it as follows:

C:\debug\ApiPort.exe analyze -f C:\test\Test.dll

As with the Visual Studio plugin, this produces an Excel sheet by default; however, this example is again only comparing against the default targets. For a complete list of targets you can run the following command (versions marked with an asterisk are the defaults):

C:\debug\ApiPort.exe listtargets

So if I want to change the target to, say, an older version of the standard like 1.3, I can run a command like this:

C:\debug\ApiPort.exe analyze -f C:\test\Test.dll -t ".NET Standard, Version=1.3"

This makes integration with an existing build process much more straightforward; you could probably even wire this up to raise an alarm if someone uses an API with limited support.

Checking NuGet Packages

There is a very useful community project called I Can Has .NET Core, which allows you to upload your project's packages.config, project.json and paket.dependencies files for analysis. The site then builds a visualization of your package dependencies and determines whether equivalent .NET Standard versions are available on nuget.org. As an added bonus, you can point the site at your GitHub repo and it will automatically scour it for packages as well.

I Can Has dotnet core

Your dependencies are categorized as Supported, Known Replacement Available, Unsupported and Not Found.



July 6, 2017 3:54
Tagged in .NET | Tools

Over the last few years .NET developers have been given an opportunity to develop software targeting a genuinely diverse set of devices, operating systems and platforms (.NET Framework, Xamarin, .NET Core). This has been mostly a blessing, but the subtle (and sometimes not so subtle) differences in each stack have started to create underlying issues around portability. So if you create a handy NuGet package, the question of which .NET developers can take advantage of it has become a complex one, and this is mostly due to the various implementations of the Base Class Library (BCL).

As you know, the BCL contains your primitive types, and frankly each version of .NET was created by a different team of developers, so namespaces and classes ended up with slightly different implementations, again negatively impacting your chances of true cross-platform development. So how do you unify the various BCLs?

.NET Standard

Why the .NET Standard?

The .NET Standard is a specification and it represents a set of APIs that all current and future .NET platforms have to implement.

  • The .NET Standard defines what is consistently available across each version of .NET
  • Cross platform developers can focus on mastering the standard rather than the platform

This may sound immediately familiar, in that this is what Portable Class Libraries were supposed to accomplish, and to a certain extent they did. The problem was that the PCL was an afterthought rather than a strict standard, so each .NET platform team could decide whether or not they would implement an API; this inconsistent approach was immediately problematic for everyone outside of Microsoft.

Which .NET Standard to use?

Higher versions of the .NET Standard include the lower versions, so if, for example, you build a .NET Standard 1.6 library it also includes everything defined in 1.3 and 1.0. To see exactly which APIs and namespaces you get to target, check out the .NET API Browser. The APIs of .NET site also lets you search for a particular class and see which standard versions and platforms support it.

.NET Standard Compat table

Generally speaking, you should select the lowest version of the .NET Standard that you can accept; this will provide the broadest base of compatible platforms and user experiences.

Over the next few weeks I am going to be checking out how this approach can assist me in bringing an application like DasBlog up to speed.




June 29, 2017 5:00
Tagged in .NET

I certainly do not live under the assumption that compilers or platforms are flawless; however, it is not often I think about the ways in which security vulnerabilities can be introduced via the quest for constant improvement. A researcher over at Microsoft, Nuno Lopes, wrote a fascinating article about how the pressure to continuously improve compilers may inadvertently introduce bugs, and with them security vulnerabilities:

Compilers are big: most major compilers consist of several million lines of code. Their development is not stale either: every year, each compiler sees thousands of changes. Their sheer size and complexity, plus the pressure to continuously improve compilers, results in bugs slipping through. These compiler bugs may in turn introduce security vulnerabilities into your program.

In response to this continuous development, Microsoft has published a paper that presents Alive, a domain-specific language for writing compiler optimizations and automatically proving them correct.

Developing a system that can automatically check the correctness of a compiler gives me the feeling that I am one step away from losing my vocation.



June 26, 2017 3:19
Tagged in Research | Security

In my formative years I assumed that I would have a career in some field that leaned heavily on classical physics. I simply loved the mathematics and how it could capture the motions of the physical world in a few relatively simple formulas. Isaac Newton helped define the idea of inertia in his most famous work Mathematical Principles of Natural Philosophy. Here is an excerpt:

"The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavours to preserve its present state, whether it be of rest or of moving uniformly forward in a straight line."

So objects travelling (in a straight line) are said to have momentum, and in order to change that momentum we have to overcome their inertia. Given that objects endeavour to "preserve its present state", change is not always easy. So if a big truck (lots of mass) is going really quickly (lots of velocity), then it has lots of momentum, and inertia is the resistance we have to overcome to alter its course or trajectory. The greater the momentum, the more difficult the object is to influence.

Software is resistant to change

When you create a software service or platform, the day you release it for public consumption you make a binding contract with the developers and customers who have decided that your platform is useful to them. Every part of your software creates a kind of inertia of its own that is resistant to change, and poor design is one of the more obvious ways platforms become resistant to innovation.

Just like objects in motion, active software can resist innovation; indeed, many decisions you make are weighted by the past. Making changes today depends on considerations that existed long before you started thinking about adding new features. I have personally made a career of managing brownfield applications that need careful curation, and I get to see how we inadvertently create unhealthy inertia when tending to our projects.

There are several ways to measure the momentum of your software, and we can use that definition to understand the inertia we are up against when attempting to make meaningful change. First, the equation for momentum in physics is as follows:

MOMENTUM = MASS * VELOCITY

I have used this equation as the starting point for defining software momentum, and we can redefine these variables to complete the analogy:

MASS - The number of versions of the software you actively support
VELOCITY - The rate at which you are adding features or fixing bugs
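
Purely as a back-of-the-napkin illustration, here is what that calculation might look like in code; the numbers are entirely made up:

using System;

class SoftwareMomentum
{
    static void Main()
    {
        // Mass: the number of versions of the software you actively support.
        int supportedVersions = 6;

        // Velocity: the rate at which you are adding features or fixing bugs (changes per month).
        double changesPerMonth = 40;

        // Momentum: the product of the two, a rough proxy for how much energy
        // a genuinely disruptive change will require.
        double momentum = supportedVersions * changesPerMonth;

        Console.WriteLine($"Software momentum: {momentum}");
    }
}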

The momentum of your legacy app is the product of these two variables, and it directly relates to the amount of energy needed to introduce truly innovative transformations. You can immediately see how legacy applications create massive amounts of inertia: after years of creating supported versions, without necessarily having a plan to sunset the older ones, your psychic weight looks backwards rather than forwards. Maintaining older versions of software enforces existing relationships, influences the nature of all your departments (development, infrastructure, sales, support) and cements a rigid agreement between your software and the customer.

The more inertia you inadvertently accumulate the more intractable certain problems become, and it is at this point that wildly successful platforms become ripe for disruption. They simply lose the ability to respond to customers in new ways.

Newly developed applications, on the other hand, allow you to immediately alter relationships in pursuit of more profitable or efficient services. Ever noticed how some app or social network comes along and appears to completely disrupt existing services? The low mass of the software allows it to tackle a problem that larger platforms simply could not adjust to efficiently, even with massive amounts of resources and manpower.

How your organization functions is partly defined by the design of your applications and their ability to evolve and adapt. Good design forces your software to shed unnecessary mass; bad design simply adds to it, making change laborious and counterintuitive. Just as important are the ways in which other departments develop around your software's inertia; everyone gets to influence inertia, not just those writing code.

So my fellow developers, how are you overcoming inertia in your software lifecycle?



June 17, 2017 2:06
Tagged in Development Process

I am not sure the position I am taking here is for everyone; in fact, I am sure that many organizations producing much better content than me will strongly disagree. However, my position on AMP has changed from “let’s see” to a firm “absolutely no chance”. As I wrote my original post on how DasBlog could be modified to support AMP, it was not until the final moment that I actually understood how caching would change the way consumers interact with my content. If I may quote myself:

You actually go to this:

https://www.google.com/amp/poppastring.com/blog/IntegratingGooglesAMPProject.aspx?amp=1/

I understand why caching is important for further speed improvements but I am very protective of my URL, and it feels like I am ceding control to Google somehow. I am hoping that the promised SEO bump will be worth the sacrifice, we shall see.

So that is awful, what else?

One of my knee-jerk reactions to hearing “Open Source” attached to a project is to assume that the community has selected, and is indeed driving, a particular change; with AMP that notion is not even close to the truth, and it is frankly naïve conjecture.

The worst problem with AMP is that it has happily eschewed the most important standard on the web, HTML, for something else. This alternate HTML forced me to either rewrite all my content (going back 10-plus years) or create a ham-fisted HttpModule that modified my content on demand. Then, without a hint of irony, AMP compels you to remove your own JavaScript files (because they are bad and slow) and include an alternate JavaScript library; the only difference is that Google controls it.

My conclusion is that we can build fast, mobile web apps without AMP; the techniques are well documented, and losing control of your URL is not worth the caching benefits.



May 31, 2017 1:18
Tagged in Google | SEO | Web

An engrossing read from an anonymous British security researcher, MalwareTech, who with lots of skill (and maybe a little luck) managed to stop the advance of a ransomware outbreak.

NHS systems all across the country being hit, which was what tipped me off to the fact this was something big. Although ransomware on a public sector system isn’t even newsworthy, systems being hit simultaneously across the country is … I was quickly able to get a sample of the malware with the help of Kafeine, a good friend and fellow researcher. Upon running the sample in my analysis environment I instantly noticed it queried an unregistered domain, which I promptly registered.

Apparently, one of the ways in which malware avoids detection is by attempting to ping a known address before commencing its attack on the system; this is a subtle check to ensure that researchers are not monitoring the malware from within a controlled environment.

I believe the malware creators were trying to query an intentionally unregistered domain which would appear registered in certain sandbox environments, then once they see the domain responding, they know they’re in a sandbox and the malware exits to prevent further analysis. This technique isn’t unprecedented: the Necurs trojan queries five totally random domains, and if they all return the same IP it exits…

However, because WannaCrypt used a single hardcoded domain, my registration of it caused all infections globally to believe they were inside a sandbox and exit... thus we unintentionally prevented the spread and further ransoming of computers infected with this malware. Of course now that we are aware of this, we will continue to host the domain to prevent any further infections from this sample.
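
To make the sandbox-check idea concrete, here is a deliberately simplified sketch of the general pattern being described; the domain is a placeholder and the real malware's logic was considerably more involved:

using System;
using System.Net.Http;

class KillSwitchSketch
{
    static void Main()
    {
        // Placeholder for the hardcoded, normally unregistered domain.
        const string probeUrl = "http://unregistered-killswitch-domain.example/";

        using (var client = new HttpClient())
        {
            try
            {
                // In a sandbox that fakes DNS responses (or once a researcher registers
                // the domain), this call succeeds and the program bails out.
                client.GetAsync(probeUrl).GetAwaiter().GetResult();
                Console.WriteLine("Domain responded: assuming an analysis environment, exiting.");
                return;
            }
            catch (HttpRequestException)
            {
                // No response: this is the branch where the real payload would have continued.
                Console.WriteLine("Domain did not respond.");
            }
        }
    }
}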

This is a fascinating tale, and you can read the whole thing over at Ars Technica.



May 25, 2017 0:28
Tagged in Security

At the conclusion of Build, the resounding idea was that as developers we are empowered to write code for any platform, on any platform; additionally, Microsoft is still aggressively pushing the idea of productivity in a maturing virtual world through devices like HoloLens. This is both a powerful idea and one that will bolster the enterprise narrative for Microsoft for many years to come. It could be firmly argued that Microsoft is turning into IBM at an incredible rate (but that is a different post).

In contrast to the coding ubiquity promised by Microsoft, what jumped out at me about the Google I/O keynote was the transformative potential of Google Lens: technology that can identify objects in the real world with your smartphone camera.

"Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information” - Sundar Pichai

The stage demos included pointing the camera at a flower and recognizing its type, or viewing a router and automatically joining a wireless network by recognizing the network name and password label. Impressive!

What is important about this strategy is that we now have another path to understand, catalogue and search the real world. The current web has forced us to duplicate the real world in digital text so that it becomes easier to recognize, reference and recall. Someone had to laboriously convert what was seen, observed and heard into text, and search engines have gotten really good at understanding those pages with remarkable accuracy and precision. Consider the way we search HTML for keywords today, how searches can decipher intent and use HTML links to determine relative rankings.

Now combine this idea with machine intelligence, which will allow us to search audio and, more importantly, video. Our searches could see context within a video: the clothes being worn, the car being driven, the locations being passed, the facial responses and the sources that inspired them.

There is an entirely new layer of data within the grasp of a company that understands search, and the ways in which it can be catalogued and monetized.

The future looks bright, and Google can see it!




May 19, 2017 3:20
Tagged in Artificial Intelligence | Cloud Services | Google