Parsing command line arguments in .NET Core

1 -> Experimenting with the Kudu API

2 -> Building and Packaging .NET Core with AppVeyor

3 -> Parsing command line arguments in .NET Core

I’m working on a little command line tool called k-scratch that allows you to interact with the Kudu API to pull and push files (and monitor them locally for upload) as well as watch the log stream – all from the command prompt.

Now that I’ve made the decision that this project will not be a UWP, I’m refocusing on the console app.

How to parse?

I realised my app was going to do more than the basic args[] input, and I’d need to do some work to parse and organise the command line inputs into recognisable commands.

I had a hunt around and found a few things.

I started off investigating commandline by gsscoder. I managed to get it going, but I had some troubles with .NET Core. Their current published nuget packages do not support .NET Core. They have a branch, which worked – but without a working build I can reference in my app it’s a bit too much overhead to manage for this project.

I started out forking commandline and modifying the AppVeyor.yml file to get a NuGet package out – which worked, but I ran into a few issues down the line that had me searching around for another solution.

I came across this thread on the dotnet cli page after @GeoffreyHuntley suggested I search for it.


Great, so I went and found System.CommandLine, which is (seemingly!) the component that the dotnet CLI uses. It’s not published as part of the main core framework – it’s in the corefx labs. It’s also not on the main NuGet feed, only on MyGet.

I had to adjust my NuGet package sources to add the MyGet feed:

> nuget sources add -name "Corefxlab" -Source "https://dotnet.myget.org/F/dotnet-corefxlab/"

I also did the same in my AppVeyor.yml file which worked nicely.

before_build:
- nuget sources add -name "Corefxlab" -Source "https://dotnet.myget.org/F/dotnet-corefxlab/"

Great, now the package is installed!

I’ve so far had a play around with it working in .NET Core from the samples on the System.CommandLine site.

It’s super easy to use. You have Commands and Options. A command is paired with the options that follow it, until the next command is defined.

ks pull -p
ks commit -m -amend

etc.

var command = string.Empty;
var prune = false;
var message = string.Empty;
var amend = false;

ArgumentSyntax.Parse(args, syntax =>
{
    //syntax.DefineOption("n|name", ref addressee, "The addressee to greet");

    syntax.DefineCommand("pull", ref command, "Pull from another repo");
    syntax.DefineOption("p|prune", ref prune, "Prune branches");

    syntax.DefineCommand("commit", ref command, "Committing changes");
    syntax.DefineOption("m|message", ref message, "The message to use");
    syntax.DefineOption("amend", ref amend, "Amend existing commit");
});

Console.WriteLine($"Command {command}, Prune {prune}, Message {message}, Amend {amend}");

Now to go and make it do stuff!

Dependency Injection with WebAPI, Service Fabric and Autofac

Sample code

I really like the way Azure Service Fabric exposes actors and services from the factory as nicely typed interfaces in C#. It opens up some great scenarios for dependency injection and generally improving the developer experience.

To this end I decide to try and get Autofac going with the WebAPI Visual Studio template for Service Fabric.

The end goal is being able to pass services in like any other injected dependency like this:

private readonly IStateless1 _service;

public SomeRepo(IStateless1 service)
{
    _service = service;
}

public async Task<string> GetSomething()
{
    return await _service.GetHello();
}

You will need to install the Service Fabric SDK for this stuff.

Interestingly the template that is created when you add a WebAPI project to the Service Fabric project is not your traditional WebAPI affair – in this case we rely heavily on OWIN to build a self hosted WebAPI project almost from scratch (the template does handle most of this for you though).

When I saw the template generated code I feared that Autofac would not “just slot in” – it’s quite different to the regular WebAPI. That was not the case it turns out!

Because the WebAPI template makes great use of OWIN, the standard Autofac WebAPI Owin stuff worked pretty much straight away.
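For reference – and this is from memory, so double check the package names – the integration types used below (RegisterApiControllers, RegisterWebApiFilterProvider, AutofacWebApiDependencyResolver) come from Autofac’s Web API integration package on NuGet, with an OWIN companion package alongside it:

Install-Package Autofac.WebApi2
Install-Package Autofac.WebApi2.Owin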

The magic happens in the Startup.cs file as per usual.

First you create a new ContainerBuilder and register the controllers.

var builder = new ContainerBuilder();
// Register your Web API controllers.
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

The next code is from the template – create the HttpConfiguration and set up the Web API routes.
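
That template code looks like this (reproduced from the full Startup.cs listing at the end of this post):

// Configure Web API for self-host.
HttpConfiguration config = new HttpConfiguration();

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);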

That config is then passed into the Autofac extension to register the filter providers (so you can inject into filters).

builder.RegisterWebApiFilterProvider(config);    

Now the fun part – registering our own services.

I created the amazingly named Stateless1 reliable service. It exposes IStateless1, which has the following method on it:

public async Task<string> GetHello()
{
     return $"This is a test {DateTime.Now}";
}

I then register this using the overload of Register that allows you to supply a factory lambda.

builder.Register((e) => ServiceProxy.Create<IStateless1>(
     new Uri("fabric:/Application3/Stateless1")))
     .As<IStateless1>();

Once that is done, build the container and you’re set!

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);
appBuilder.UseWebApi(config);

I like to separate my services out from the rest of the code a little, even if that code only references the interfaces. So I’ve placed the actual access to IStateless1 in a repo that my code (controllers) will access, rather than having them use the IStateless1 interface directly. This centralises the “SDK” in a library that other parts of your code can use.

public interface ISomeRepo
{
    Task<string> GetSomething();
}

public class SomeRepo : ISomeRepo
{
    private readonly IStateless1 _service;

    public SomeRepo(IStateless1 service)
    {
        _service = service;
    }

    public async Task<string> GetSomething()
    {
        return await _service.GetHello();
    }
}

https://github.com/jakkaj/AutofacServiceFabricExample/blob/master/src/WebApi1/Model/SomeRepo.cs

Note that IStateless1 is injected here. Once that is done, register the ISomeRepo with the container back in Startup.cs

builder.RegisterType<SomeRepo>().As<ISomeRepo>();

Finally – you can inject ISomeRepo into your controller and start to see the results!

private readonly ISomeRepo _someRepo;

public ValuesController(ISomeRepo someRepo)
{
    _someRepo = someRepo;
}
// GET api/values 
public async Task<IHttpActionResult> Get()
{
    return Ok(await _someRepo.GetSomething());
}

Here is the full Startup.cs file

public static void ConfigureApp(IAppBuilder appBuilder)
{
    var builder = new ContainerBuilder();

    // Register your Web API controllers.
    builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

    // Configure Web API for self-host.
    HttpConfiguration config = new HttpConfiguration();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    builder.RegisterWebApiFilterProvider(config);

    // Register the repo that our code will use to abstract the end code one level from the actor
    builder.RegisterType<SomeRepo>().As<ISomeRepo>();

    // Register the actor.
    builder.Register((e) => ServiceProxy.Create<IStateless1>(new Uri("fabric:/Application3/Stateless1")))
        .As<IStateless1>();

    // Set the dependency resolver to be Autofac.
    var container = builder.Build();
    config.DependencyResolver = new AutofacWebApiDependencyResolver(container);
    appBuilder.UseWebApi(config);
}

Full listing of https://github.com/jakkaj/AutofacServiceFabricExample/blob/master/src/WebApi1/Startup.cs

Building and Packaging .NET Core with AppVeyor

1 -> Experimenting with the Kudu API

2 -> Building and Packaging .NET Core with AppVeyor

3 -> Parsing command line arguments in .NET Core

I’ve been working on a project called k-scratch which allows remote logging and file sync with Kudu based sites – mainly for making local editing of Azure Functions easier.

As part of that I broke out the log stream component (called KScratchLog) into a standalone console app. I plan to make this into a broader console app that can get, push, log stream etc. all from the command prompt… but before any of that can happen I figured I should get some CI going.

I decided on AppVeyor because it has .NET Core support, is free and works well with GitHub (like super well).

AppVeyor allows you to run PowerShell and CMD scripts, and the environments that builds are run in will be familiar to most .NET developers.

Most of the heavy lifting and config is done by placing an AppVeyor.yml script in the root of your GitHub Repo.

I had a hunt around, and saw that some projects use custom build scripts with AppVeyor in conjunction with the yml file, but I wanted to try and do it all in the yml.

Searching I found an example yml file by Steven Liekens that I used as a starting point.

Setting up the project.json files

I created my project in Visual Studio. It has a console app and a series of portable projects that are .NET Standard 1.3 based.

The first thing I had to do before I could get it to build on the command line using dotnet build was reference the dependency projects in the project.json file. Visual Studio did not do this automatically as it relies on the references in the .xproj files.

"dependencies": {
"Autofac": "4.1.0",
"Microsoft.NETCore.Portable.Compatibility": "1.0.1",
"NETStandard.Library": "1.6.0",
"System.IO": "4.1.0",
"System.Xml.XmlSerializer": "4.0.11",
"KScratch.Entity": {
"target": "project"
},
"KScratch.Contract": {
"target": "project"
}

The next step was making sure the build outputted .exe files, which it doesn’t by default. This is done in project.json.

Scott Hanselman’s post on self contained apps in .NET Core was a handy reference for this.

"runtimes": {
"win7-x64": {},
"osx.10.10-x64": {},
"ubuntu.14.04-x64": {}
}

Also make sure you reference the portable projects here too:

"frameworks": {
"netcoreapp1.0": {
"imports": "dnxcore50",
"dependencies": {
"Microsoft.NETCore.App": {
"version": "1.0.1"
},
"KScratch.Contract": {
"target": "project"
},
"KScratch.Entity": {
"target": "project"
},
"KScratch.Portable": {
"target": "project"
}
}
}

The final part of the story was getting the build to work. I played around on the command line on my local machine first to get it going, before transporting the commands into the build_script section of the AppVeyor.yml file.

I also added the ability to build separate platforms in the script; for now only Windows is present.

Worth noting is that the AppVeyor platform would not support win10-x64 so I had to change it to win7-x64.

Once the build is completed and dotnet publish is called I package up the file using 7z, before referencing that zip as an artefact.

You can see a sample build output here and the resulting artefact here.

Finally – I went to the AppVeyor settings and got the Markdown for the all-important AppVeyor build status badge and inserted it in my readme.md file!

Build status

Full AppVeyor.yml listing from here

version: '1.0.{build}'
configuration:
- Release
platform:
- win7-x64
environment:
  # Don't report back to the mothership
  DOTNET_CLI_TELEMETRY_OPTOUT: 1
init:
- ps: $Env:LABEL = "CI" + $Env:APPVEYOR_BUILD_NUMBER.PadLeft(5, "0")
before_build:
- appveyor-retry dotnet restore -v Minimal
build_script:
- dotnet build "src\KScratch.Entity" -c %CONFIGURATION% -r %PLATFORM%  --no-dependencies --version-suffix %LABEL%
- dotnet build "src\KScratch.Contract" -c %CONFIGURATION% -r %PLATFORM% --no-dependencies --version-suffix %LABEL%
- dotnet build "src\KScratch.Portable" -c %CONFIGURATION%  -r %PLATFORM% --no-dependencies --version-suffix %LABEL%
- dotnet build "src\KScratchLog" -c %CONFIGURATION% -r %PLATFORM% --no-dependencies --version-suffix %LABEL%
after_build:
- dotnet publish "src\KScratchLog" -c %CONFIGURATION% -r %PLATFORM% --no-build --version-suffix %LABEL% -o artifacts\%PLATFORM%
- 7z a zip\KScratchLog_%PLATFORM%.zip %APPVEYOR_BUILD_FOLDER%\artifacts\%PLATFORM%\*.*
#test_script:
#- dotnet test "src\KScratch.Tests" -c %CONFIGURATION%
artifacts:
- path: zip\**\*.*
cache:
- '%USERPROFILE%\.nuget\packages'
on_finish: # Run the demo to show that it works

Internet Reliability Log – using Functions and Application Insights

Just posted over on GitHub my experiences using Azure Functions, PowerShell, Task Scheduler and more to log internet reliability data to Application Insights.

Check out the full article and code here.

Using Azure Functions, PowerShell, Task Scheduler, Table Storage and Application Insights

Jordan Knight, Feb 20, 2017

*Note: You will need a Microsoft Azure account for this. If you don’t have one you may be eligible for a free trial account.

Since getting an upgrade recently my home internet has been very unstable with up to 39 drop outs a day.

I did the usual thing and rang my provider – only to be told that “it’s currently working” and there is not much they can do. Of course, it was working when I called.

So I called when it wasn’t. The tech comes out a couple of days later. “Oh, it’s working fine”. He tinkered with it and left.

It’s still dropping out to the point that I’m having to tether my phone to my home PC.

So I figured I’d collect some data. Lots of data.

I of course had a look around to see if something could do it – the solutions I found were non-intuitive or cost money. Nope – CUSTOM BUILD TIME. There, justified.

I had a think around how I might go about this – I didn’t want to spend too much time on something that was already costing me time. How to throw together a monitoring system without spending too much time?

The system I came up with uses Azure Functions, Table Storage, Application Insights, Powershell and Windows Task Scheduler. I threw it together in a couple of hours at most.

Basic Flow

The process starts with a PowerShell script that is fired by Task Scheduler on Windows every 1 minute.

This script calls the UptimeLogger Azure Function which logs the data to an Azure Storage Table.

I then have a Processor Azure Function that runs every minute to check whether there are any new entries in the Azure Table. If not – then we know that there has been some downtime.

This processor function sends the results about up-time to Application Insights.
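
To make that flow concrete, here’s a rough sketch of the decision the processor makes. This is not the actual function code (that’s linked in the In Depth section below); the UptimePing shape matches the logger function later in the post, while the two minute threshold and the metric/event names are assumptions on my part.

using System;
using Microsoft.ApplicationInsights;

public class UptimePing
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime PingTime { get; set; }
}

public static class UptimeCheck
{
    public static void Report(UptimePing latestPing, TelemetryClient telemetry)
    {
        // If the newest ping is missing or stale, treat the connection as down
        var isUp = latestPing != null &&
                   (DateTime.UtcNow - latestPing.PingTime) < TimeSpan.FromMinutes(2);

        telemetry.TrackMetric("State", isUp ? 1 : 0);   // the 1/0 state metric graphed later

        if (!isUp)
        {
            telemetry.TrackEvent("InternetDown");       // discrete "it went down" event
        }
    }
}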

In Depth

Table Storage

Set up an Azure Storage Account.

  • Click here to get started.

  • Enter a name for your new storage account (I called mine internetuptime – all lower case!).

  • Create a new Resource Group called intetnetmonitoring which will allow you to keep all your bits for this project in the same place.

  • Once that is created you should be able to install StorageExplorer and browse to it. This will be a good way to debug your service later.

Azure Functions

If you know about Functions, grab the function code from here and set it up.
There are a couple of inputs and outputs to think about, and some project.json files to work with for NuGets, FYI.

I know about functions, skip me to the next bit

Next you need to set up the Azure Functions. These provide an end point for your pings as well as background processing that does the actual up/down calculation and sends data to Application Insights.

  • They are super easy to get going – click here to get started.

Select the Consumption plan if you’re not sure what to select there.
Use your new Resource Group called intetnetmonitoring so you can group all the services for this project in the one place.


  • Next go into the editor so you can start editing your new functions. If you can’t locate it, look under the App Services section.

  • Add a new function called InternetUpLogger.

  • Filter by C# and API & WebHooks, then select the HttpTrigger-CSharp option. Enter the name InternetUpLogger and click Create.

This will create a new function that will act as a web endpoint that you can call.

Create the Function (YouTube video)

You will see a place to drop some code. This is a basic Azure Function.

Before you can edit the code you need to add some outputs.

Azure Functions can look after some inputs and outputs for you – so you don’t have to write a lot of code and config to, say, read from a database and write to a table.

  • Click on Integrate, then click New Output. Select Azure Table Storage from the list and click Select.

Next you’ll need to set up the connection to your table storage if you’ve not done so already in this Function.

  • Click New next to Storage account connection and select the account from the list.

You may want to change the output table name here to something like pingTable. You will need to remember this for later when we take this new table as input in another function.

  • Once that is completed, click Save.

You can expand the documentation to see some examples of how to use the new output.

Add The Output (YouTube video)

Now you can paste in the function code from here

Some points of interest

Note the ICollector that is passed in to Run automatically. This is the parameter you configured. In my video it’s called outTable, so you may need to change yours to match (oops!).

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ICollector<UptimePing> outputTable, TraceWriter log)
{
}

The next interesting thing is the RowKey I’m setting.

var dtMax = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19");

var ping = new UptimePing{
    RowKey = dtMax,
    PartitionKey = data.MachineName,
    PingTime = DateTime.UtcNow
};

outputTable.Add(ping);

You cannot arbitrarily sort Azure Table rows using LINQ etc. – rows always come back ordered ascending by RowKey. So by subtracting the current ticks from DateTime.MaxValue.Ticks, we make sure that the newer the row, the smaller its key, which pushes the most recent pings to the top. This will be handy later when we want to get the latest pings out to analyse recent data.
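
A quick worked example of why that ordering falls out (the dates here are purely illustrative):

// Two pings a minute apart – illustrative values only
var older = new DateTime(2017, 2, 20, 10, 0, 0, DateTimeKind.Utc);
var newer = older.AddMinutes(1);

var olderKey = (DateTime.MaxValue.Ticks - older.Ticks).ToString("d19");
var newerKey = (DateTime.MaxValue.Ticks - newer.Ticks).ToString("d19");

// Thanks to the zero padding, the numerically smaller key is also lexically smaller,
// so the newest ping always sorts to the top of the table.
Console.WriteLine(newerKey.CompareTo(olderKey) < 0); // True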

Once that is done we pop it into the ICollector, which will go and add it to the table for us! Too easy!

PowerShell

The next step is to set up the PowerShell script to call the function on a scheduler.

  • Copy the URL of your function from just above the code bit on the Develop tab – you’ll need this in a second.

  • Grab the PS1 file and copy it somewhere on your machine (or just run it from your GitHub checkout).

  • Edit it to insert your function URL into the indicated spot.

Ping PowerShell Script

  • Jump into PowerShell and try it out (hint: go to the directory and type powershell in the Explorer address bar at the top).
.\pinger.ps1

Make sure that it prints out something saying that it worked 😛

  • Next create a new Scheduler Job – from Start Menu search for Task Scheduler.

Add The Task (YouTube video)

  • For the trigger, select any start time (in the past works best) and then have it repeat every 1 minute.

  • For the action, have it call powershell.exe and pass in the arguments -ExecutionPolicy Bypass followed by the path to your pinger.ps1 script (e.g. via -File).

Now you can check if it’s working by going back in to the Azure Function and watching the log. Also, you can check that your table was created by exploring with Azure Storage Explorer.

Background task to process the data

In order to know whether we’re up or down, and then do stuff based on that, we need something to process our data.

Azure Functions can be called via HTTP (as we are above) – but they can also be called in many other ways – including on a schedule.

  • Create a new function called InternetUpProcessor that is a TimerTrigger-CSharp.

Create the processor function (YouTube video)

  • Set the cron expression to one minute:
0 */1 * * * *
  • You’ll also need to pass the table that you created as the output in the first function to the input of this function. In the YouTube video I called it outTable, but you may have renamed it to pingTable or something.

  • Next you need to add another different output table to store the actual up-time/down-time results.

  • Create a new output to an Azure Table called uptimeTable. This will be passed in to the function.

  • At the same time you’ll need to create another table input that also points to uptimeTable… this is so we can check it to see if the system was already down or not and do extra processing.

Create uptime inputs and outputs (YouTube video)

Now you can copy in the code for the function from here.

You may note that the function is not building. That’s because it uses some NuGet packages that are not available by default.

To add NuGet packages you first need to add a new file to your function called project.json.

Add NuGets (YouTube video)

  • Click on View Files and add a new file called project.json. Paste in the content from here and save.
  • You should see packages restoring when you view the log.

Application Insights

Next we need to create a new Application Insights app in Azure.

  • Click here to create a new Application Insights app.
  • Leave the values default and choose the resource group you used for your other things.
  • Once it has been created you can collect your instrumentation key from the Properties tab on the new Application Insights resource.
  • Paste that key into the indicated spot in the function code.

Application Insights properties

Once you’re receiving telemetry you can do a couple of searches in the Application Insights interface to visualise your internet connection stability.


I went in and added a couple of metric graphs.

  • Click on Metrics Explorer. If there is not already a graph to edit, click to add one.

I added two.

Downtime graph

This graph shows downtime in minutes. So you can see over time how many minutes your system is out.

State metric graph

This one is the state (1 or 0) of the connection over time. Sometimes it will show as in between 1 and 0 – this is the average of the state during the measurement window.

If you want to see actual downtime events you can add a new search.

  • Click on Search from the Overview panel of Application Insights.

state filter

  • Click on filters and search for state. Select false.

This will filter down to events that have a state of false… i.e. INTERNET DOWN. You could also look for the InternetDown event, which shows the times when the internet went down as opposed to the time ranges it was down for.

Output in action

This isn’t saying that the internet went down 96 times – it’s that it was down during 96 sampling periods. The InternetDown event shows the number of times it actually went down.

That’s pretty much it! You’re done.

Extra Credit – SpeedTest

I added a speed test using this same project for s&g’s.

  • There is another function here that you can install.

  • Then grab the code from here.

  • Edit Upload.cs and paste in your new Speedtest function URL.
  • Build it and create a new Scheduled Task for every 15 mins (or whatever).
  • In Application Insights metrics explorer, add a new graph of UploadSpeed_MachineName and DownloadSpeed_MachineName (same graph, they can overlay).

Extra Credit – Push

I’ve set my system up to create pushes.

I did this by creating a new Maker URL callback channel on IFTTT which passes the value through to a push notification. This then sends the push to the IFTTT app on my phone without me needing to write an app just to receive a push.

It’s outside the scope of this article to go through that, but you can see the remnants of it in the InternetUptimeProcessor function.

Push settings

If you get stuck, ping me – I’d be happy to expand this article to include it later.

Cheers,

Jordan.

Experimenting with the Kudu API

1 -> Experimenting with the Kudu API

2 -> Building and Packaging .NET Core with AppVeyor

3 -> Parsing command line arguments in .NET Core

I’ve been playing around with the idea of some remote tools for Azure Functions using the Kudu API, mostly to make a nice way to edit and run Azure Functions from a local machine (large screen, easier access etc).

The repo is here https://github.com/jakkaj/k-scratch. It’s a work in progress.

I figure the parts I need are:

  • Log in from PublishSettings.xml
  • LogStream
  • Ability to query the functions in the site
  • Ability to list / download files
  • Ability to upload files

I decided to build the libraries on top of .NET Core.

I thought initially about a UWP app -> click + to add a new publish settings file, which would add a vertical tab down the left hand side. Each tab would be a little window into that function. The LogStream, the files, editing + save would immediately upload and run it.

Perhaps I’ll get to that, but for now I’m building each bit as a standalone console app (.NET Core based) that takes the PublishSettings.xml as param.

So far I have the LogStream working, which was actually pretty easy!

You can grab a PublishSettings.xml file from your function by going to Function app settings / Go to App Service Settings / Click “…” and click Get Publish Profile.

You load the PublishSettings XML into a nice entity. I created an entity to load the XML into by using Xml2Csharp.com. Result entity here.

See this file for how the XML is loaded into the entity.

The PublishSettings.xml file contains the Kudu API end points, and also the user and password needed to authenticate. Authentication is done using the basic authentication header.

Convert.ToBase64String(
    Encoding.UTF8.GetBytes($"{settings.UserName}:{settings.Password}"));
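
Here’s a minimal sketch of how that header ends up on the HttpClient (the variable names are mine, but this is the standard basic auth approach):

// using System.Net.Http; using System.Net.Http.Headers; using System.Text;
var creds = Convert.ToBase64String(
    Encoding.UTF8.GetBytes($"{settings.UserName}:{settings.Password}"));

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", creds);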

Once I had that organised, I could start calling services.

I tried a few ->

  • GET /api/zip/site/{folder} will zip up that folder and send it back
  • GET /api/vfs/site/{folder}/ will list a path
  • GET /api/vfs/site/{folder}/{filename} will return that file

etc.

Great, I can get files! Change the HTTP method to PUT and I can upload files.
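
As a rough sketch only (the site name and file path here are placeholders, and “client” is the authenticated HttpClient from above), listing a folder and uploading a file looks something like this:

// List a folder – returns a JSON listing of the files in that path
var listing = await client.GetStringAsync(
    "https://yoursite.scm.azurewebsites.net/api/vfs/site/wwwroot/");

// Upload (PUT) a file to the same path – note that overwriting an existing file
// may also require an If-Match (ETag) header
var response = await client.PutAsync(
    "https://yoursite.scm.azurewebsites.net/api/vfs/site/wwwroot/somefile.csx",
    new StringContent("// file contents"));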

I then tried some LogStream. Using the new Bash shell in Windows, I was able to straight up curl the log stream.

 curl -u username https://xxx.scm.azurewebsites.net/logstream

Straight away logs are streaming. Too easy! Next step was to see if the .NET Core HttpClient can stream those logs too.

 _currentStream = await response.Content.ReadAsStreamAsync();
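
One detail worth calling out (this is an assumption about how the request is set up, but it’s what makes streaming possible): the call needs HttpCompletionOption.ResponseHeadersRead so HttpClient hands back the stream as soon as the headers arrive, rather than trying to buffer a response that never ends.

// Assumed setup for the call above – return as soon as the headers arrive
var response = await client.GetAsync(logStreamUrl, HttpCompletionOption.ResponseHeadersRead);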

Works really well it turns out – just read the stream as lines come in and you have yourself log output.

using (var reader = new StreamReader(_currentStream))
{
    while (!reader.EndOfStream && _currentStream != null)
    {
        // We are ready to read the stream
        var currentLine = reader.ReadLine();
        // ... handle the line (see the full implementation linked below)
    }
}

Full implementation of that file here.

Then I added a simple app – KScratchLog which takes the PublishSettings.xml path as the parameter and will start showing logs.

So why not just use curl? Simplicity of loading the PublishSettings.xml to get user/pass/API endpoint really.

Next steps – file download, change monitoring and uploading after edit. The goal is to allow editing in Visual Studio Code and have it auto save back to the App Service on file save.

The Thin Turing Line

This short article is not a panacea of change nor an all-encompassing view of the future; rather, I’d like to talk about some of the interesting areas where I see early movement in a new and exciting direction.

The next couple of years are going to be like no others we’ve seen in the technology space. It goes without saying that technology is moving forward at a pace never seen before. It’s becoming more sophisticated, more intelligent whilst at the same time becoming simpler to use – for end users and developers alike. It’s the fourth industrial revolution – and it’s based around intelligent systems.

Rest assured that bringing forward previous ways of thinking is not going to suffice – incumbents in system design, software architecture and platform choice are no longer so clear. What’s next is murky, but we can get a glimpse.

User Experience and the general movement of defaulting to screen based interfaces are also up for review.

One of the first areas to change will be the way we as people interact with technology. We’ve got these fantastic small and portable devices, with great screens, that can do amazing things we’d have only dreamed of just a handful of years ago – but they are unnatural. They are incumbents resulting from our level of technology in the early 2000s. There was a gap in personal computing that got filled… and it was amazing… at the time.

Deep interaction – content consumption (like watching video, reading an article) – is safe for now. Screens make sense for this kind of big, deep content. It’s structured, unintelligent. The screen in this case is a delivery mechanism for pre-canned, static(ish) content. You can find a corner and watch it. You’ll probably want to be seated and still.

The biggest changes I see are with shallow interaction – where a device tops you up with the latest pertinent data and knowledge in real time. You are doing daily things – walking, talking, meeting, discussing, fussing.

The method of augmentation of your daily life – interacting with you as a person – the augmentation of your intelligence – is on the chopping block.

Screens, as it were, will soon not make the grade as the default interface choice. People will no longer accept this unnatural interloper into their most personal inner circles. As far as I’m concerned, screens can go away and come back again when they are projected as heads-up displays in my vision, preferably with direct optic nerve stimulation (but I’ll take contact lenses in the interim).

We’ve seen inroads in to this type of technology – although mostly they default back to what we know (screens). Watches that tap and pop when new information arrives.

The next stop on the path to technological wonderment is actually going to be a loop back to the past. We’re coming around to language based interfaces. It’s perhaps the original structured communication platform. It’s built in to all of us by default. It requires a deep intelligence to navigate and operate – the human mind.

The popularity of devices like the Amazon Echo reinforces this statement. People didn’t even know they wanted it. Yet there it is sitting at home – a semi-sentience that lives with you and that you can depend upon… an amazing development. But it’s not portable. It’s not very personal either – it’s a group affair.

Imagine for a moment that you had a set of glasses that could lip read your commands and respond to you through bone induction that only you can hear. Such a device could keep you updated with important contextual information that is provided by your personal AI as you need it.

There is a Kickstarter product that is doing just this – minus the lip reading part. The Vue glasses use bone induction to provide information via Bluetooth from your phone. You can say commands out loud to them or use an array of buttons and contact points to provide input.

“Jordan – the bus will leave in 8 minutes.” “Jordan, you’re walking past the super market, remember to get milk.” All manner of informational updates could be whispered for only you to hear.

You could of course pose questions. Or (as the Kickstarter video above shows) you could double tap to get the current time.

These scenarios and capabilities are easy to imagine – we have this capability in many ways already. The bone inducting glasses are a new content delivery platform – but in isolation they do not go to that next stage.

To truly transcend in to the next generation of computing we must move to conversational platforms where people forget they are talking to a machine.

We must – and will – skirt the thin Turing line.

More and more products like this will become the norm. Society will expect it as people become more tired and weary of screen-based interaction throughout daily life. Having experienced a truly intelligent system delivered in a meaningful and human-centric way, people will not want to go back to the old ways.

Software that is not backed by some form of advanced machine intelligence will seem static and dated very soon.

Once you’ve had intelligence, you can’t go back.

It’s time to reconsider the technologies that we should be investing in next to be ready and waiting for the next phase in personal computing.

Strange PlayReady error 0x8004B823

I’ve been doing some work with PlayReady based streams on Windows UWP and I came across a strange error without much information on how to fix it.

The error code is listed on MSDN as “The requested action cannot be performed because a hardware configuration change has been detected by the Microsoft PlayReady components on your computer.”

The trouble was that the stream was working on other PCs and was definitely working on this computer (my laptop) – but now it was just erroring out with the above error.

Later on I tried again and it worked! The difference – now I was running Windows in Parallels (my machine is a MacBook). So it works when running in Parallels but not when running Windows on bare metal. It seems my licences became attuned to the Parallels version of “the way my PC looks” and it now no longer works in “full metal Windows”.

I have no solution other than to run in Parallels at this time – would be great if I could figure out how to reset the licenses or something!

Strange errors when doing some Windows IOT exploration on Raspberry PI 2

I started my new project – but after a while I started to get some very strange errors.

Severity Code Description Project File Line
Error The .winmd file 'Windows.Devices.DevicesLowLevelContract.winmd' contains type 'Windows.Devices.DevicesLowLevelContract' outside its root namespace 'Windows.Devices.DevicesLowLevelContract'. Make sure that all public types appear under a common root namespace that matches the output file name. XIOTCore_Samples

and

Severity Code Description Project File Line
Error The .winmd file 'Windows.Devices.DevicesLowLevelContract.winmd' contains type 'Windows.Devices.Spi.ISpiConnectionSettings'. The use of the Windows namespace is reserved. XIOTCore_Samples

After a while of mucking around I discovered that in one of my projects I’d used Resharper to resolve and import the required reference for me, and it brought in the wrong thing… I deleted the wrong reference, added the proper SDK reference manually and did a clean build.
TL;DR you must include the IOT SDK yourself, don’t let Resharper do it for you!

Simple Task Throttling

A while ago Scott Hanselman posted an article where they compare various methods of asynchronous synchronisation. Scott and Stephen Toub came up with a little class called AsyncLock that utilises SemaphoreSlim and IDisposable to make a nice little utility to block more than one thread from accessing a piece of code at a time. In Xamling-Core we extended this so you can have named locks (like a file name for example).

Recently we extended it a little further to allow you to throttle calls – the same idea as the full lock, but letting a few calls through at a time.

TaskThrottler.cs

For example, when resizing lots of images you could wrap your code in one of these ThrottleLocks and only four calls would run at a time.

TaskThrottler _getBlock = TaskThrottler.Get("ImageServiceProcess", 4);
using (var l = await _getBlock.LockAsync())
{
    var result = await _imageResizeService.ResizeImage(fn, variantFile, size);
    ...
}

Super simple.

Another adaptation allows you to call the throttler directly to line up a bunch of processing and then wait for the results.

There are two versions here – calling processes that return data and those that don’t.

With data returned:

List<Task<SomeEntity>> tasks = new List<Task<SomeEntity>>();

for (var i = 0; i < 500; i++)
{
    var t = TaskThrottler.Get("Test", 50).Throttle(_getAnEntity);
    tasks.Add(t);
    Debug.WriteLine("Added: {0}", i);
}

await Task.WhenAll(tasks);

...

async Task<SomeEntity> _getAnEntity()
{
    var t = new SomeEntity()
    {
        Count = ++count
    };

    await Task.Delay(1000);

    Debug.WriteLine("Actual: {0}", t.Count);

    return t;
}

You can of course call methods that don’t have any return value.

So there you have it – a simple, lightweight asynchronous throttling utility.

Jordan.

Simple, robust workflow system for mobile (.NET PCL) based apps

We build a lot of mobile apps. And those apps are usually complex. They send stuff to servers, process things in order, handle network connectivity issues, app exit and resume etc. That’s a lot going on, and we found our apps were getting a bit too complex; we were handling the same issues every time we started a new project – there had to be a better way.

Well now there is – we made a super simple workflow called XWorkflow. It’s available as part of the (under construction) MSPL based Xamling-Core.

Xamling-Core on GitHub.
Xamling-Core Nuget Package (Xamarin Unified build coming soon – just finishing testing).
Source code from a talk on Workflow bits – with examples – on GitHub.

More examples are in Xamling-Core test suite (in the Big Windows tests, it’s a bit quicker to work in there, then run on Xamarin later).

Gist showing how to track flows in the UI.

Note on platform support

Most of Xamling-Core is portable, and indeed you can use those bits stand-alone. The actual MVVM and UI components are designed to work on Xamarin Forms, Windows Phone and Windows 8 style apps. At the moment the UI bits are working on Xamarin Forms – iOS and Windows 8 (we call that BigWindows – because soon it will be Windows 10 and yeah, anyway – no more metro…). Anyway, we’re working on Windows Phone and Android MVVM stuff very soon.

What can it do?

By our definition here, a workflow is a bunch of tasks that need to be performed in a certain order, with pass or fail results.

You configure a bunch of stages in order, compile the flow and start firing entities into it! The core goal of this system is not a fancy feature set – but to be very simple and reliable.

Features of our workflow system are:

  • Resume if the app quits unexpectedly (or indeed – expectedly).
  • Configurable retry. Try again if it fails, up to your defined count. Tell the user if it fails forever.
  • UI can restart a flow easily without having to keep track of what actually failed (just say “try again” with a button). Great for generic UIs that don’t know about your underlying model etc.
  • Great for integration with UI (so you can easily report progress in some funky UI you make). Receive notifications when flow stages are updated.
  • Wait for network (stages can be configured to require network before starting).
  • Disconnected flow – like a long running server process that sends a push when completed, or UI based stages that need user input.
  • It’s hardy – crashes will not affect it.
  • It’s serialisable – you can send WF state to the server for diagnosis.
  • Stages are pass or fail, and each stage can have friendly UI text for each result.
  • You can merge stages into other stages – so merge your “save item to server” flow into a “create new item” flow.
  • It’s portable. This will work on any .NET platform that supports Portable Class Libraries. It’s designed for mobile, but we’ve used it all over the place!
  • Much more!

Note on Entities

This workflow system works only with entities in the Xamling-Core entity system. It’s pretty easy – implement the IEntity interface on your entities and then call the .Set() extension method on them… they will then be available in the Xamling-Core entity management system – that’s another entire post in itself 🙂
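
As a very rough sketch of the shape (the Guid Id property and the awaitable Set() are my assumptions here – check the Xamling-Core samples for the real interface):

// Hypothetical entity – IEntity’s exact members aren’t shown in this post,
// but the workflow callbacks below reference entities by Guid
public class SampleEntity : IEntity
{
    public Guid Id { get; set; }
}

// Register it with the entity manager, then fire it into a flow
var entity = new SampleEntity { Id = Guid.NewGuid() };
await entity.Set();
await entity.StartWorkflow(Workflows.ProcessNotificationMessage);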

See examples in the sample talk code here for entity manager usage. It’s pretty simple. I’ll do a better post on it another time.

A simple usage of this system might be to upload data to a server and wait for the result (even if the network doesn’t come back until after the next app launch!).

Configuration

The flow stage setup process is fluent.

 
await _hub
    .AddFlow(Workflows.DownloadEntity, "Download")
    .AddStage(
        EntryDownloadStages.DownloadFromServer,
        "DownloadingTripodEntry",
        "FailedToDownloadEntry",
        _entryProcessService.RefreshEntryFromServer,
        false,
        true,
        2
    )
    .Complete();

Here we configure a new flow. This one downloads something from the server (when it calls _entryProcessService.RefreshEntryFromServer).

First you add a new flow giving it a flowId and a friendly name. You can use this flowId later to access it from other places in your code.

 
public XFlow AddFlow(string flowId, string friendlyName)

Next you configure the stages. Give each stage a friendly id, processing UI text, fail UI text, the method to run, whether it’s a disconnected process, whether it requires network, the number of retries, and a special fail function if needed.

We often pass in strings that the UI will use to look up the localised version of the text.

 
public XFlow AddStage(string stageId, string processingText, string failText, Func<Guid, Task<XStageResult>> function,
    bool isDisconnectedProcess = false, bool requiresNetwork = false, int retries = 0, Func<Guid, Task<XStageResult>> failFunction = null)

You can do other stuff like merge flows

 
.Merge(_hub.GetFlow(Workflows.DownloadTripodForEntity))

Remember to call .Complete() when you’re done!

Our tip is to make your flows small, and merge them into larger flows. Compose them…

Starting a flow

Getting an entity into the flow is simple using the extension method. Of course you can do it the long way (check out EntityDataExtensions.cs).

 
using XamlingCore.Portable.Data.Extensions;
...
await thisEntity.StartWorkflow(Workflows.ProcessNotificationMessage);

Off it goes!

As I said, the Workflow system uses the EntityManager system (which is another part of Xamling-Core). The core principle of that system is that once an entity is set, it always gets the same item given the same Id. It’s also serialised to local storage, so you can get it later even if the app restarts. So the WF system uses only Ids to transport data around… making the entire thing super simple internally.

Callbacks to your code look something like this:

public async Task<XStageResult> RefreshEntryFromServer(Guid entryId)
{
    var entry = await _entryEntityManager.Get(entryId);

    if (entry == null)
    {
        return new XStageResult(false, entryId);
    }
    // ...

As you can see – it’s up to you to get the entity and ensure it’s good to go.

A successful result would be returned like this

 
return new XStageResult(true, entryId);

You can control a fair bit from those XStageResult returns – you can even change the Id of the entity… just in case the server gave you a new Id when it did its bit.

public XStageResult(bool isSuccess, Guid id, string extraText = null, bool completeNow = false, string exception = null)

It obviously says whether the operation was successful or not, can provide some more text for the UI, can terminate the flow early (successful result, but terminate now) and can provide any exception details – either for the UI or to send up to the server as part of diagnosis (keeping in mind the entire workflow state is serialised).

Work in progress

We’re still just adding in some features and making it better, but so far it’s pretty stable and seems to be working well… It’s being used in the Project Tripod for iPhone code to upload images to the server, wait for processing, then download the result again after a push notification.

If you need some help getting it going or have any other ideas and feedback please let me know! Remember to check through the examples at the top and to keep an eye out for more posts coming soon.

Jordan.