Replacing lighting programmers with AI

The title is clickbait, but I can picture situations where we could replace lighting programmers with artificial intelligence, using the language understanding techniques that power Alexa, Google Assistant and Cortana. I'm going to discuss how I'm working on this using Microsoft's language understanding and AI technology.

Console Syntax

Most DMX lighting control systems have a command-line interface which supports a domain-specific language for programming lights for theatre shows, concerts, TV and everything in between. The syntax is usually pretty similar across different manufacturers, but there are always subtle differences that can trip up experienced users when switching systems.

As I've been investigating how to create the language users will use to program their shows on my control system, I've continually come back to the idea that the lighting industry has standardised what it does; it's the tools that introduce the variation.

An extreme example of this is the theatre world, where the lighting designer may call for specific fixtures and exact intensities. They may say something like "one and five at fifty percent". The lighting programmer will then type something like 1 + 5 @ 50 Enter on one brand of lighting console and perhaps Fixture 1 + 5 @ 5 on another. The intent is the same, but the specific syntax changes.

It's currently the role of the lighting programmer to understand the intent of the designer and execute it on the hardware in front of them. The best lighting programmers can translate abstract and complex queries into machine-understandable commands using the domain-specific language of the control system they're using. They understand the lighting console's every feature and are master translators. They're a bridge between the creative and the technical, but they are still fundamentally just translating intents.

Removing the need for human translation

Voice to text would go some way towards removing the lighting programmer: for simple commands like the one demonstrated earlier, it's easy to convert the utterance into an action. But most designers don't build scenes like this. For more complex commands, the console will likely get it wrong, and with no feedback loop it won't have the opportunity to learn from its mistakes the way a human would.

This is where AI can help significantly. I'm currently adding cloud-powered machine learning to my console so that, eventually, even the most complex requests can be fulfilled by the console alone. While a cloud-connected lighting console probably seems strange, it's just a stepping stone towards a fully offline system.

Language Understanding with AI

Let me walk you through the training process as I go about teaching it to understand a range of syntax and utterances. For this blog post I've started from scratch, but in reality I have a fleshed-out version with many more commands, which supports syntax from a variety of consoles.
The first step is to create a new 'App' within the Language Understanding Intelligent Service (LUIS).


By default, our app has no intents, as we're building a solution to suit our needs rather than using a pre-canned, prebuilt solution. To get started, we'll define an intent to affect the Intensity parameter of a fixture.


We need to provide some examples of what the user might say to trigger this intent. To make this powerful, we want to give a variety of examples. The most natural is something like "1 @ 50". This contains numeric symbols because that's what our consoles' interfaces provide us with, but if we're using a voice-to-text solution, we'll instead get the response "One at fifty". To solve this, we need to create some entities so that our AI understands that "one" is also 1. Thankfully, to make developers' lives easier, Microsoft provides a whole host of prebuilt entities, so we can use their number entity rather than building our own.


Matching to numbers is helpful, but we also need to provide information about other types of lighting-specific entities. Below I define a few SourceTypes, as the offline syntax for my console follows the grammar of 'Source, Command, Destination'.

[Screenshot: creating the SourceType entity]
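
To make that grammar concrete, here's a minimal sketch of how a parsed command might be modelled. These C# type names are my own invention for illustration; they aren't the console's actual API.

[sourcecode language="csharp"]
// Hypothetical model of the 'Source, Command, Destination' grammar.
public enum SourceType { Fixture, Group }
public enum PaletteType { Intensity, Colour, Position }

public class ProgrammerCommand
{
    public SourceType Source { get; set; }        // e.g. Fixture
    public int[] SourceIds { get; set; }          // e.g. { 1, 5 }
    public string Command { get; set; }           // e.g. "@"
    public PaletteType? Destination { get; set; } // e.g. Intensity
    public int Value { get; set; }                // e.g. 50
}
[/sourcecode]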

I also provide synonyms, which means that if a lighting designer for some crazy reason calls all lighting fixtures "devices", we can still accurately calculate the intent. Synonyms are incredibly powerful when building out language understanding, as you'll see below in the PaletteType entity. I've created synonyms for Intensity, which allow designers to say things like "Fixture 1 brightness at 50 percent" rather than needing to know that the console thinks of brightness as intensity. I've also made sure to misspell Colour for Americans…

[Screenshot: the PaletteType entity and its synonyms]

Even with just three entities, our intent is more useful than just setting intensities; we can now handle basic fixture manipulation. For example, "Group 3 @ Position 5" would work correctly with this configuration. For this reason, I renamed the intent to something more sensible (Programmer.SourceAtDestination).


Training and testing the AI

Before we can use our AI service, we must train it. The more data, the better, but we can get some great results with what we already have.


Below you can see I passed in the utterance "fixture 10 @ colour 5".

[Screenshot: test results for the utterance]

The top-scoring intent (we only have one, so it's a little bit of a cheat) is Programmer.SourceAtDestination. The SourceType is Fixture and the number is 10.
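
For the curious, querying the published app from code is just an HTTP call. Here's a rough C# sketch against the v2.0 endpoint; the region, app ID and subscription key are placeholders you'd swap for your own.

[sourcecode language="csharp"]
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class LuisClient
{
    // Placeholder values: substitute your own region, app ID and subscription key.
    const string Region = "westus";
    const string AppId = "YOUR-APP-ID";
    const string Key = "YOUR-SUBSCRIPTION-KEY";

    static readonly HttpClient http = new HttpClient();

    // Sends an utterance to LUIS and returns the raw JSON, which includes
    // topScoringIntent (e.g. Programmer.SourceAtDestination) and the entities.
    public static Task<string> QueryAsync(string utterance)
    {
        var url = $"https://{Region}.api.cognitive.microsoft.com/luis/v2.0/apps/{AppId}" +
                  $"?subscription-key={Key}&q={Uri.EscapeDataString(utterance)}";
        return http.GetStringAsync(url);
    }
}

// Usage: var json = await LuisClient.QueryAsync("fixture 10 @ colour 5");
[/sourcecode]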

What’s next?

Language and conversation will find their way into many technologies where they may not be obvious right now. I believe it won't be long until a lighting control manufacturer releases some form of AI bot or language understanding within their consoles, and these will get better with every day they're used. Maybe I'll be first, but I can't believe no one else has seen this technology and thought about how to build it into a control system, so perhaps we'll start to see this become the norm in a few years.
Right now it's early days, but I'd put money on virtual lighting programmers shipping with consoles. What type of personality these virtual programmers have will be down to each manufacturer. I hope they realise that their virtual programmer needs some personality, or it'll be no more engaging than voice to text. I've given some thought to my bot's personality, and it stems from real people. I'm hoping to provide both female and male assistants, and they'll be named Nick and Rachel.

Takeaways

It’s never too early to start investigating how AI can disrupt your industry. This post focuses on the niche that is the lighting control industry, but this level of disruption will be felt across all industries. Get ahead of the curve and learn how you can shape the future of your industry with Microsoft AI services.


Private packages with Azure DevOps

Recently Microsoft announced a rebranding of Visual Studio Team Services (VSTS) to Azure DevOps, and as a big fan of Azure, I wanted to check whether the changes were just a new name or whether it had progressed to be a little more welcoming.
I say this because, as someone with limited experience using VSTS, I always found it to be a little intimidating, so I tended to use simpler services like App Center for building my apps and Trello for my Kanban boards. I hoped that the change would include some UI enhancements that could help me ease into DevOps rather than being thrown into the deep end.
Thankfully, the team has done some fantastic work in making Azure DevOps easier to get started with, and I've now adopted it for managing my long-term personal project.
In this post, I’m going to discuss how and most importantly why I’ve configured Azure DevOps to allow me to have confidence in the code I’m writing.

One huge solution to rule them all!

The project I'm working on is big, or at least it's going to be massive. Right now it's just a minimum viable product, yet it already contains 17 projects, which I originally put into a single Git repository. This worked well at the beginning of the project, but as I added more and more projects it became difficult to keep things separate.

It's for this reason that I decided to create two separate solutions to make a clear separation of concerns. Ultimately, I'll probably end up splitting the Lighting Core solution further as the project develops, but for now I think two solutions provide me with enough separation.
[Diagram: the simple two-solution architecture]

Smaller Solutions

Having two separate solutions rather than one beast makes it significantly easier to ensure that the Lighting Core code doesn't become too tightly coupled to my UI and vice versa. It does, however, cause me some difficulty in referencing dependencies, as I don't have an easy way to ensure that the UI project has all the code required to build. To solve this, I moved all my code into Azure DevOps as a stepping stone towards fully embracing the tool.


Private NuGet Feed

With all the code hosted in Azure DevOps, I have a one-stop shop for my project's development.

I went ahead and defined build processes and hooked them up so they'd be triggered every time I pushed code to the master branch.


The build steps are very simple: I restore packages, build, and then pack up the DLLs ready for release.
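
I set these steps up in the visual designer, but if you prefer pipelines as code, they translate to roughly the following YAML. This is only a sketch: the task is the standard DotNetCoreCLI task, and the project paths are assumptions about my repository layout.

[sourcecode language="yaml"]
trigger:
  - master

steps:
  # Restore NuGet packages for every project
  - task: DotNetCoreCLI@2
    inputs:
      command: restore
      projects: '**/*.csproj'

  # Build everything
  - task: DotNetCoreCLI@2
    inputs:
      command: build
      projects: '**/*.csproj'

  # Pack the DLLs into NuGet packages ready for the release pipeline
  - task: DotNetCoreCLI@2
    inputs:
      command: pack
      packagesToPack: 'LightingCore/*.csproj'  # assumed path
[/sourcecode]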


I've defined separate pack tasks for each project that I wanted to turn into a NuGet package. This task handles packaging up the results of the build ready for releasing either publicly or privately.


I've then defined the most basic release pipeline possible, taking the results of the build pipeline and pushing the packages to NuGet.


Because I’m releasing the packages privately, I host them in Azure DevOps and can access them in Visual Studio with minimal configuration required!


Wrapping up

This blog post covers, at a very high level, how I've gone about setting up the basics of a continuous integration and deployment system for my pet project. If you want to learn how to configure your own CI/CD system, then check out the great tutorial over at Microsoft Docs.


Continuous delivery of macOS apps built with Swift

Anyone familiar with my ramblings will be aware that I mostly develop in C#, using a mixture of Xamarin and .NET Core depending on what I'm building. Earlier this year I made the decision to get serious about learning Swift and got started by building a simple utility app for macOS to help me find training images for some machine learning.


The app has mostly just sat on GitHub with little love since I originally published it, so this week I dusted it off (I actually just cloned it again from GitHub, but I like the metaphor) and started implementing a few of the features that didn't make it into the first release. The most obvious is file tagging, which makes it possible to add file system tags to exported images, making them easier to find later.


I shared a gif of the new build with a colleague, and he loved it so much that he wanted a copy. Now, I could easily have behaved like an animal, built a version on my development machine and sent over the results, but instead I opted to listen to the sane part of my brain that was calling for me to set up a CI/CD pipeline.

Enter Microsoft’s App Center

If you're not familiar with App Center, then you're in for a treat! App Center provides a one-stop shop for the services app developers will likely need. This includes building, testing, distribution, analytics and crash reporting, to name a few. Today I'm going to focus on the build aspect, but I'll cover other features in upcoming posts.

Microsoft has been working hard on adding new features to App Center, and one of those new features, currently in preview, is the ability to build Swift macOS apps. The setup process only requires a few clicks, and we're up and running. Below is a gif of the process recorded in real time, which shows how quickly I managed to get a build set up and running.

[Gif: setting up an App Center build in real time]

App Center Build Setup

To get started, we have to create a new app within App Center, specifying a name, OS and platform as a minimum. In my case, I only really need to worry about selecting macOS, as App Center currently only supports Objective-C and Swift for macOS app development.

Setting up the build pipeline

Once we've clicked "Add New App", we'll be presented with a screen encouraging us to integrate the App Center SDKs into our app. I'll cover the advantages of this in another post, as it's not needed to use App Center. Did I mention that every feature in App Center is optional? In this post, we're only going to use the build and distribute functionality and ignore everything else.


Connecting the repository

As mentioned earlier in the post, the code is hosted on GitHub, which is integrated with App Center. This allows me to connect App Center to the repository, and any time I push to a branch I can have App Center automatically trigger a build.


Once I've selected GitHub, I'm presented with a list of all my repositories so I can select the one I wish to link to my App Center app.


In this example, the repository only has one branch, so I'll select that puppy and move on to configuration.


Build Configuration

We want to do a few things with the build configuration. First, it has to sign the build for distribution using my Apple certificates; second, I want to increment the version number of the app automatically.


Signing builds

In order to sign builds for distribution, we’ll need to upload a copy of our .p12 file and a valid provisioning profile.


Incrementing build numbers

App Center has a native understanding of our project's Info.plist file (thanks to the work they did on supporting iOS), so incrementing the build number only requires a few button clicks to configure.


Distribution

We're almost finished configuring the build process, but there's one last step to configure, and that's distribution!

By default, the distribution list is a little lonely as it’ll just be you, but as you find people excited to try your apps you can add them to lists and control what versions of the app they get. For example, you might want your VIPs to get GM access and staff to have access to betas.


Adding distribution groups

To set up my VIPs distribution list, I head over to the "Distribute" section in the left-hand menu and click "Add Group".


Right now I've only one VIP, and that's my colleague Dean, but this is enough to demonstrate the functionality. It's worth noting that I need to pop back to the build configuration and update the distribution group to VIPs if I want Dean to get a copy of the builds triggered from master.


Distribution email

And with only a few clicks, my users will now get a nice email with a link to install the latest and greatest builds of my app!


Conclusion

App Center is a powerful tool for app developers to streamline their development processes, from building and distribution to monitoring after release. I hope this post has helped you understand how easy it can be to set up a CI/CD pipeline for macOS apps developed with Swift 4.0. If you've any questions or feedback, then please don't hesitate to reach out.

Consuming Microsoft Cognitive Services with Swift 4

This post is a direct result of a conversation with a colleague in a taxi in Madrid. We were driving to Santiago Bernabéu (the Real Madrid Stadium) to demonstrate to business leaders the power of artificial intelligence.

The conversation was around the ease of use of Cognitive Services for what we call "native native" developers. We refer to those who use Objective-C, Swift or Java as 'native native', as frameworks like React Native and Xamarin are also native, but we consider these "XPlat native". He argued that the lack of Swift SDKs prevented the adoption of our AI services, such as our Vision APIs.

I maintained that all Cognitive Services APIs are well documented and that we provide an easy-to-consume suite of REST APIs, which any Swift developer worth their salt should be able to use with minimal effort.

Putting money where my mouth is

Having made such a statement, it made sense for me to test if my assertion was correct by building a sample app that integrates with Cognitive Services using Swift.

Introducing Bing Image Downloader: a fully native macOS app for downloading images from Bing, developed using Swift 4.


I've put the code on GitHub for you to download and play with if you're interested in using Cognitive Services within your Swift apps, but I'll also explain below how I went about building the app.

Where the magic happens

In the interest of good development practices, I started by creating a protocol (C# developers should think of these as interfaces) to define which functions the ImageSearch class will implement.

Protocol

protocol ImageServiceProtocol {
    // We will take the results and add them to a hard-coded singleton class called AppData.
    func searchForImageTerm(searchTerm : String)

    // We pass in a completion handler for processing the results of this func
    func searchForImageTerm(searchTerm : String, completion : @escaping ([ImageSearchResult]) -> ())
}

Two Implementations for one problem

I've made sure to include two implementations to give you options for how you might want to interact with Cognitive Services. The approach used in the app makes use of a singleton class for storing AppData, as well as Alamofire for handling network requests. We'll look at this approach first.

searchForImageTerm

This is the public func, which is the easiest to consume.

func searchForImageTerm(searchTerm : String) {

    //Search for images and add each result to AppData
    DispatchQueue.global(qos: .background).async {
        let totalPics = 100
        let picsPerPage = 50
        let numPages = totalPics / picsPerPage
        (0 ..< numPages)
            .compactMap { self.createUrlRequest(searchTerm: searchTerm, pageOffset: $0) }
            .forEach { self.fetchRequest(request: $0 as NSURLRequest) }
        RunLoop.current.run()
    }
}

createUrlRequest

private func createUrlRequest(searchTerm : String, pageOffset : Int) -> URLRequest {

    let encodedQuery = searchTerm.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!
    let endPointUrl = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"

    let mkt = "en-us"
    let imageType = "photo"
    let size = "medium"

    // We should move these variables to app settings
    let imageCount = 100
    let pageCount = 2
    let picsPerPage = imageCount / pageCount

    let url = URL(string: "\(endPointUrl)?q=\(encodedQuery)&count=\(picsPerPage)&offset=\(pageOffset * picsPerPage)&mkt=\(mkt)&imageType=\(imageType)&size=\(size)")!

    var request = URLRequest(url: url)
    request.setValue(apiKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")

    return request
}

fetchRequest

This is where we attempt to fetch and parse the response from Bing. If we detect an error, we log it (I'm using SwiftyBeaver for logging).

If the response contains data we can decode, we’ll loop through and add each result to our AppData singleton instance.

private func fetchRequest(request : NSURLRequest) {
    //This task is responsible for downloading a page of results
    let task = URLSession.shared.dataTask(with: request as URLRequest) { (data, response, error) -> Void in

        //We didn't receive a response
        guard let data = data, error == nil, response != nil else {
            self.log.error("Fetch Request returned no data : \(request.url?.absoluteString)")
            return
        }

        //Check the response code
        guard let httpResponse = response as? HTTPURLResponse,
            (200...299).contains(httpResponse.statusCode) else {
            self.handleServerError(response: response!)
            return
        }

        //Convert data to a concrete type
        do {
            let decoder = JSONDecoder()
            let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)

            let imagesToAdd = bingImageSearchResults.images.filter { $0.encodingFormat != EncodingFormat.unknown }
            AppData.shared.addImages(imagesToAdd)
        } catch {
            self.log.error("Error decoding ImageResultWrapper : \(error)")
            self.log.debug("Corrupted Base64 Data: \(data.base64EncodedString())")
        }
    }

    //Tasks are created in a paused state. We want to resume to start the fetch.
    task.resume()
}

Option two (with no 3rd-party dependencies)

As a .NET developer, the next approach threw me for a while and took a little bit of reading about closures to fully grasp. With this approach, I originally wanted to return an array of ImageSearchResult, but this proved not to be the best approach. Instead, I pass in a completion handler that processes the array of results.

// Search for images with a completion handler for processing the result array
func searchForImageTerm(searchTerm : String, completion : @escaping ([ImageSearchResult]) -> ()) {

    //Because Cognitive Services requires a subscription key, we need to create a URLRequest
    //to pass into the dataTask method of a URLSession instance.
    let request = createUrlRequest(searchTerm: searchTerm, pageOffset: 0)

    //This task is responsible for downloading a page of results
    let task = URLSession.shared.dataTask(with: request, completionHandler: { (data, response, error) -> Void in

        //We didn't receive a response
        guard let data = data, error == nil, response != nil else {
            print("something is wrong with the fetch")
            return
        }

        //Check the response code
        guard let httpResponse = response as? HTTPURLResponse,
            (200...299).contains(httpResponse.statusCode) else {
            self.handleServerError(response: response!)
            completion([ImageSearchResult]())
            return
        }

        //Convert data to a concrete type
        do {
            let decoder = JSONDecoder()
            let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)

            //We use a closure to pass back our results.
            completion(bingImageSearchResults.images)

        } catch { self.log.error("Decoding ImageResultWrapper \(error)") }
    })
    task.resume()
}

Wrapping Up

You can find the full project on my GitHub page, which contains everything you need to build your own copy of this app (maybe for iOS rather than macOS?).

If you have any questions, then please don’t hesitate to comment or email me!


App Services Custom Domain, SSL & DNS

We've all seen tutorials which demonstrate how to deploy a simple todo list backend to Azure, but how many have you read that go on to secure it? In this post, I'm going to cover how I'm securing the Bait News v2 backend infrastructure, as well as how to configure custom domains.

Why bother?

Apple announced in 2015 that apps and their corresponding backend servers would need to support App Transport Security (ATS).

ATS was introduced with iOS 9 as a security enhancement to ensure all connections made by apps use HTTPS. Initially slated to go into effect for all new App Store submissions from January 2017, it has since been postponed with no update on when it'll come into effect. Although the requirement has been delayed, it's still something all app developers should be implementing, as it provides our users with added security and ensures man-in-the-middle attacks can't go unnoticed.

Historically, you'll see most developers (including myself) opt to turn ATS off to make our lives easier. Some will take a lighter touch and only disable ATS for a single domain (their backend), which is not much more secure than turning ATS off altogether. Either approach opens up your users and data to attack and should be avoided.

So what do we need to do to secure our app? Let's first register a domain for our backend.

Custom Domains

DNS

I've been using 123-Reg as my domain registrar for 10 years and continue to use them as I migrate my websites to Azure. Most domain registrars will also provide some basic DNS functionality, but you'd normally want to use a third-party DNS service for more advanced scenarios. In my case, I'm using 123-Reg's DNS service and have added a number of CNAMEs pointing to Azure.

Adding records

Below are the minimum records needed to enable my custom domain.

Permanent Records

[Screenshot: permanent DNS records]

Temporary Records

[Screenshot: temporary verification records]

To get started, I added an A record pointing to the App Service instance using its IP address. You can find your App Service's IP address by going into its Custom Domain blade within the Azure portal.

Once you've added the A record, you can then create the CNAME which will map www requests to your backend's URL. You can find your destination in the Overview blade of the App Service.
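
Together, the two permanent records boil down to something like this (the IP address and hostname below are illustrative; yours will differ):

@      A        40.xx.xx.xx                  (your App Service IP address)
www    CNAME    baitnews.azurewebsites.net   (your App Service default hostname)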

Verify Domain Ownership

Azure needs to know I own the domain I'm trying to map. To prove this, I'll add two more records to my DNS settings: the temporary records listed above.

Once I've added the verification CNAME records, I can save and sit tight. DNS records need to propagate across the globe, which can take up to 24 hours.

This is the end result of my DNS configuration. I also created some CNAMEs to redirect traffic from subdomains to other Azure services.

[Screenshot: the final DNS configuration]

Portal Configuration

To finish off, I need to configure the App Service Custom Domain settings.


Hit the ‘Add Hostname’ button and enter the custom domain.


After hitting Validate, Azure will check the DNS records to confirm the domain exists and that you own it. You should see something like this.

[Screenshot: successful domain validation]

Hitting 'Add hostname' will complete the process of configuring a custom domain for your App Service. If you're deploying a mobile backend, you may want to create a CNAME record which maps api.domain.net to your mobile backend whilst keeping www.domain.net mapped to an ASP.NET website.

Adding Security

SSL Certificates

As mentioned at the start of this post, enabling HTTPS prevents MITM attacks from going unnoticed and ensures communication between server and client is secure. It's pretty straightforward to enable within App Service, but much like DNS it can take a while (this time due to human factors rather than waiting for computers to sync up).

First things first, you'll need to purchase a certificate. I opted for 123-Reg, as they provide a few options to meet most users' requirements, and the integration with my domain management makes it a no-brainer.

I should admit that I did make a mistake when I first purchased a certificate, which caused a few days of delays, so it's important to double-check the type of certificate you're purchasing. I had purchased a certificate which covered only www.baitnews.io. This meant that my mobile API at api.baitnews.io couldn't use the certificate. 123-Reg refunded the first certificate and I tried again, this time making sure to purchase a certificate which supports unlimited subdomains. You can see below that the original certificate has been revoked and the new certificate supports wildcards.

When you apply for a certificate, you'll be provided with a download which includes your certificate request (CSR) in PEM format. You also get the private key, which you'll use later to create a new certificate.

[Screenshot: the revoked certificate and its wildcard replacement]

Once you've been issued the certificate, you're ready to create the certificate you'll use in Azure for everything. This is a pretty easy process, as we can use OpenSSL on almost any platform. I'm on a Mac, but this works the same on both Windows and Linux.

openssl pkcs12 -export -out baitnews.pfx \
  -inkey /Users/michaeljames/Downloads/SSL-CSR5/private-key.key \
  -in /Users/michaeljames/Desktop/wildcard.cert


Variables

  • [Output file name] – what you want to call the certificate (baitnews.pfx above).
  • [private-key.key path] – the location of the private key. This would have been provided when requesting the certificate.
  • [wildcard.cert path] – the location of the freshly issued certificate.

Once you press Enter, you'll need to type in a couple of passwords, and then you'll be set. It'll look something like this:

[Screenshot: OpenSSL output]

You now have your certificate ready for uploading to Azure. The conversion of certificates isn't the easiest of procedures to wrap your head around on the first few goes. If you're worried about this step, keep in mind that you can purchase SSL certificates through the Azure Portal, which skips many of the above steps! It does, however, add a small premium to the cost of securing your backend: the certificate is a little more expensive, and you're also required to store it in Key Vault.

Binding Domains with Certificates

Let's upload our new certificate to our App Service. To do this, head over to the SSL Certificates blade and hit 'Upload Certificate'. You'll need to provide the password used to create the certificate.


If successful, you'll see that your certificate has been imported and is ready to use with your custom domains.


Add Binding

The last step is to bind our SSL certificate to our custom domain. Clicking 'Add Binding' will allow you to select both the custom domain and the SSL certificate from drop-downs.


Hitting 'Add Binding' will finish the process. You now have a custom domain mapped to your App Service instance that supports HTTPS. Any users visiting your backend will be greeted with the familiar green padlock in the address bar.


Wrapping Up

Adding custom domains and enabling secure connectivity between your mobile app and backend is extremely simple, and there's no good reason not to do it (unless you're hacking on a demo or POC).

In the next post, I'm going to cover how to expand our setup to route traffic to the nearest App Service instance.

Creating a simple Azure backend POC for your mobile app

Most mobile apps require some form of infrastructure to function correctly. In the case of something like Instagram, they’ll likely have some blob storage for storing images and then a SQL database for storing user information like comments and likes. They’ll have a REST API which the mobile app uses to interact with these services rather than having a direct connection to each service within the backend.

Within the context of Azure, we would usually opt to use Azure App Service as our middleware/orchestration layer. Our mobile apps connect to this layer, which ensures users have the correct permissions to access the data in our storage services.

In this video, I show how I created a proof-of-concept backend for Bait News. I had an Excel spreadsheet which I wanted to host in the cloud and make available to all my mobile users. To do this, I used Azure App Service Easy Tables. Watch below to find out more:

Creating a 5 star search experience

Search is a feature that can make or break your mobile app, but it can be incredibly difficult to get right. In this blog post, I’m going to share how I’m solving search with Beer Drinkin.

There are many options for us developers looking to implement search in our projects. Some of us may decide to use LINQ and Entity Framework to look through a table, and the more adventurous may opt to create an instance of Elasticsearch, which requires a lot of work to set up and maintain. For Beer Drinkin, I'm using Microsoft's Azure Search service, as it has proved easy to configure and requires zero maintenance.

The reason that Beer Drinkin uses Azure Search is simple: the BreweryDB search functionality is too limited for my needs. One example of this is that the endpoint often returns zero results if the user misspells a search term. If I searched for "Duval" rather than "Duvel", BreweryDB's search would return zero beers. Even if I were to spell the search term correctly, BreweryDB would return all beers from the Duvel Moortgat brewery. Although this is minor, I would prefer that Maredsous 6 and Vedett Extra White not be returned, as these don't have "Duvel" in the name.


Spelling Mistakes

Another issue with using the default search functionality of BreweryDB is its inability to deal with spelling mistakes or offer suggestions. Simple off-by-one-letter spelling mistakes yield no results, something that should be easy to resolve.


I've had sleepless nights worrying that, on release, users will fail to find results due to simple spelling mistakes. One way to address spelling mistakes is to utilize a spell-checking service like WebSpellChecker.net.

The issue with a service such as WebSpellChecker is that it has no context in which to make corrections when it comes to the names of products, and it also doesn't support multiple languages.

Another way to minimize spelling mistakes is to provide a list of suggestions as the user types in a search query. You’re probably familiar with this in search engines like Google and Bing. This approach to searching is intuitive to users and significantly reduces the number of spelling mistakes.

Enter Azure Search

Azure Search aims to remove the complexity of providing advanced search functionality by offering a service that does the heavy lifting of implementing a modern, feature-rich search solution. Microsoft handles all the infrastructure required to scale as it gains more users and indexes more data. Not to mention that Azure Search supports 50 languages, using technologies from multiple teams within Microsoft (such as Office and Bing). What this equates to is that Azure Search understands the languages and words of the search requests.

Some of my favorite features

Fuzzy search – Find strings that match a pattern approximately.

Proximity search – Geospatial queries. Find search targets within a certain distance of a particular point.

Term boosting –  Boosting allows me to promote results based on rules I create. One example might be to boost old stock or discounted items.

Getting Started

The first step I took was to provision an Azure Search service within the Azure Portal. I had two options for setting up the service; I could have opted for a free tier or have paid for dedicated resources. The free tier offers up to 10,000 documents and 50MB storage, which is a little limited for what I need.

Because my index already contains over 50,000 beers, I had no option but to opt for the Standard S1 tier, which comes in at a cool $250 per month (for Europeans, that's €211). With the fee comes a lot more power in the form of dedicated resources, and I'm able to store 25GB of data. When paying for Search, you're able to scale out to 36 units, which provides plenty of room to grow.

Creating an index

Before I could take advantage of Azure Search, I needed to upload my data to be indexed. Fortunately, with the .NET SDK the Azure Search team provides, it’s exceptionally easy to interact with the service. Using the .NET library I wrote a few weeks ago, which calls BreweryDB, I was able to iterate quickly through each page of beer results and upload them in blocks to the search service.


Uploading documents

[sourcecode language="csharp"]
Parallel.For(1, totalPageCount, new ParallelOptions { MaxDegreeOfParallelism = 25 }, index =>
{
    var response = client.Beers.GetAll(index).Result;
    var beersToAdd = new List<IndexedBeer>();
    foreach (var beer in response.Data)
    {
        var indexedBeer = new IndexedBeer
        {
            Id = beer.Id,
            Name = beer.Name,
            Description = beer.Description,
            BreweryDbId = beer.Id,
            BreweryId = beer?.Breweries?.FirstOrDefault()?.Id,
            BreweryName = beer?.Breweries?.FirstOrDefault()?.Name,
            AvailableId = beer.AvailableId.ToString(),
            GlassId = beer.GlasswareId.ToString(),
            Abv = beer.Abv
        };

        if (beer.Labels != null)
        {
            indexedBeer.Images = new[] { beer.Labels.Icon, beer.Labels.Medium, beer.Labels.Large };
        }
        beersToAdd.Add(indexedBeer);
    }
    processedPageCount++;
    indexClient.Documents.Index(IndexBatch.Create(beersToAdd.ToArray().Select(IndexAction.Create)));

    Console.Write($"\rAdded {beersToAdd.Count} beers to Index | Page {processedPageCount} of {totalPageCount}");
});
[/sourcecode]

Other data import methods

Azure Search also supports indexing data stored in Azure SQL or DocumentDB, which enables you to point a crawler at your SQL table and keep the index up to date without managing the document index yourself. There are a few reasons you may not want to use a crawler, though. The biggest is that it introduces the possibility of a delay between your DB changing and your search index reflecting the change: the crawler only runs on a schedule, which can result in an out-of-date index.

If you opt for the self-managed approach, you can add, remove, and edit your indexed documents yourself as the changes happen in your back end. This provides you with live search results as you know the data is always up to date. Using the crawler is an excellent way to get started with search and quickly get some data in place, but I wouldn’t consider it a good strategy for long-term use.

I mentioned earlier that the free tier is limited to 10,000 documents, which translates to 10,000 rows in a table. If your table has more than 10,000 rows, then you’ll need to purchase the Standard S1 tier.

Suggestions

Before we can use suggestions, we’ll need to ensure that we’ve created a suggester within Azure.
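
If you're defining the index from code rather than the portal, the suggester forms part of the index schema itself. Here's a rough sketch using the Microsoft.Azure.Search .NET SDK; the field list is trimmed down, and property names may differ slightly between SDK versions, so treat it as illustrative:

[sourcecode language="csharp"]
var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("ADMIN-KEY"));

var definition = new Index
{
    Name = "beers",
    Fields = new[]
    {
        new Field("id", DataType.String) { IsKey = true },
        new Field("name", DataType.String) { IsSearchable = true },
        new Field("description", DataType.String) { IsSearchable = true }
    },
    // The suggester draws its suggestions from the 'name' field.
    Suggesters = new[]
    {
        new Suggester { Name = "nameSuggester", SourceFields = new[] { "name" } }
    }
};

serviceClient.Indexes.Create(definition);
[/sourcecode]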


In the current service release, there is limited support for index schema updates. Any schema update that would require re-indexing, such as changing field types, is not currently supported. Although existing fields cannot be modified or deleted, new fields can be added to an existing index at any time.

If you didn't check the suggester checkbox at the time of creating a field, then you'll need to create a secondary field, as Azure Search doesn't currently support editing fields. The Azure Search team recommends that you create new fields if you require a change in functionality.

The simplest way to get suggestions is to use the following API.

[sourcecode language="csharp"]
var response = await indexClient.Documents.SuggestAsync(searchBar.Text, "nameSuggester");
foreach (var r in response)
{
    Console.WriteLine(r.Text);
}
[/sourcecode]

Having fun with the suggestion API

The suggestion API provides properties for enabling fuzzy matching and hit highlighting. Let's see how we might enable that functionality within our app.

[sourcecode language="csharp"]
var suggestParameters = new SuggestParameters();
suggestParameters.UseFuzzyMatching = true;
suggestParameters.Top = 25;
suggestParameters.HighlightPreTag = "[";
suggestParameters.HighlightPostTag = "]";
suggestParameters.MinimumCoverage = 100;
[/sourcecode]
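
These parameters then get passed into the same suggest call from earlier. A quick sketch, assuming the overload that takes the suggester name plus the parameters object:

[sourcecode language="csharp"]
var response = await indexClient.Documents.SuggestAsync(searchBar.Text, "nameSuggester", suggestParameters);
foreach (var r in response)
{
    // Matched fragments arrive wrapped in our [ ] highlight tags.
    Console.WriteLine(r.Text);
}
[/sourcecode]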

What do the properties do?

UseFuzzyMatching – The query will find suggestions even if there’s a substituted or missing character in the search text. While this provides a better search experience, it comes at the cost of slower operations and consumes more resources.

Top – The number of suggestions to retrieve. It must be a number between 1 and 100, with a default of 5.

HighlightPreTag – Gets or sets the tag that is prepended to hit highlights. It MUST be set with a post tag.

HighlightPostTag – Gets or sets the tag that is appended to hit highlights. It MUST be set with a pre tag.

MinimumCoverage – Represents the percentage of the index that must be covered by a suggestion query in order for the query to be reported as a success. The default is 80%.

How do the results look?

[Screenshot: suggestion results running in the iOS simulator]

Search

The Search API itself is even easier (assuming we don’t use filtering, which is a topic for another day).

[sourcecode language="csharp"]
var searchParameters = new SearchParameters { SearchMode = SearchMode.All };

var response = await indexClient.Documents.SearchAsync(searchBar.Text, searchParameters);
[/sourcecode]
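
Each result wraps the indexed document along with a relevance score. A sketch of reading them back, assuming the untyped overload where fields are accessed by name:

[sourcecode language="csharp"]
foreach (var result in response.Results)
{
    // 'name' is one of the fields in my beer index.
    Console.WriteLine($"{result.Document["name"]} (score: {result.Score})");
}
[/sourcecode]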

Special Thanks

I'd like to take a moment to thank Janusz Lembicz for helping me get started with Azure Search suggestions by answering all my questions. I appreciate your support (especially given it was on a weekend!).

Facebook Authentication with Azure

Important Note

Microsoft has recently released Azure App Service, which aims to replace Mobile Services. I will be writing an updated guide shortly.



Azure Mobile Services (AMS) is a great platform for .NET developers to build complex apps that require a backend, in almost no time and at very competitive prices. I've been using it for about six months to develop a beer-tracking app called BeerDrinkin.

One of the requirements of BeerDrinkin was the ability to sync account data across all the user's devices, and AMS makes this incredibly easy with offline sync.

I couldn't help but feel that if I'm going to be storing data in Azure showing which beers people love and hate, then I really should be doing something more interesting than simply syncing to a handful of devices. This is why I decided to try to build a beer recommendation engine into BeerDrinkin's backend. The aim is to make use of Azure Machine Learning to suggest beers you might like, based on your consumption history and that of your peers.

In order to build a user profile to feed into machine learning, I needed more information than the simple GUID that AMS returns when calling the FacebookLoginProvider within the ConfigOptions of a Mobile Service WebApiConfig class.

The information that I wanted to have about the user:

  • Email Address
  • First Name
  • Last Name
  • Gender
  • Date of Birth

I will be adding additional data about users at various points during their interaction with the app. One example is location data, and I even plan on recording the local weather conditions for each beer the user checks in. With this, I can use machine learning to predict beers based on current weather conditions. This is important, as on a warm day most people will likely want a lager rather than a thick, blood-warming Belgian double.

Creating a custom LoginProvider

To fetch the extra information, I needed to do a couple of things. First things first, I needed to remove the default FacebookLoginProvider from my ConfigOptions. To do this I called the following:

[sourcecode language="csharp"]
options.LoginProviders.Remove(typeof(FacebookLoginProvider));
[/sourcecode]

I then went ahead and created a new class, which I named CustomFacebookLoginProvider, which importantly overrides the CreateCredentials method.

[sourcecode language="csharp"]
public class CustomFacebookLoginProvider : FacebookLoginProvider
{
    public CustomFacebookLoginProvider(HttpConfiguration config, IServiceTokenHandler tokenHandler)
        : base(config, tokenHandler)
    {
    }

    public override ProviderCredentials CreateCredentials(ClaimsIdentity claimsIdentity)
    {
        var accessToken = string.Empty;
        var emailAddress = string.Empty;
        foreach (var claim in claimsIdentity.Claims)
        {
            if (claim.Type == "Zumo:ProviderAccessToken")
            {
                accessToken = claim.Value;
            }

            if (claim.Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
            {
                emailAddress = claim.Value;
            }
        }

        // (Truncated for clarity; the full method is shown below.)
    }
}
[/sourcecode]

I now had some basic information regarding the user, such as their Facebook token (which is essential for gathering more information) and their email address. I then used a third-party NuGet package to query Facebook's Open Graph for more information about the user.

The entire method in BeerDrinkin’s Azure backend looks something like this:

[sourcecode language="csharp"]
public class CustomFacebookLoginProvider : FacebookLoginProvider
{
    public CustomFacebookLoginProvider(HttpConfiguration config, IServiceTokenHandler tokenHandler)
        : base(config, tokenHandler)
    {
    }

    public override ProviderCredentials CreateCredentials(ClaimsIdentity claimsIdentity)
    {
        var accessToken = string.Empty;
        var emailAddress = string.Empty;
        foreach (var claim in claimsIdentity.Claims)
        {
            if (claim.Type == "Zumo:ProviderAccessToken")
            {
                accessToken = claim.Value;
            }

            if (claim.Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
            {
                emailAddress = claim.Value;
            }
        }

        if (string.IsNullOrEmpty(accessToken))
            return null;

        var client = new FacebookClient(accessToken);
        dynamic user = client.Get("me");

        DateTime dateOfBirth;
        DateTime.TryParse(user.birthday, out dateOfBirth);

        // Keeping userItem for the moment but may well kill it. I was going to separate
        // userItem (public info) from accountItem (private info).
        var userItem = new UserItem
        {
            Id = user.id,
        };

        var accountItem = new AccountItem
        {
            Id = userItem.Id,
            Email = emailAddress,
            FirstName = user.first_name,
            LastName = user.last_name,
            IsMale = user.gender == "male",
            DateOfBirth = dateOfBirth,
            AvatarUrl = $"https://graph.facebook.com/{userItem.Id}/picture?type=large"
        };

        var context = new BeerDrinkinContext();
        if (context.UserItems.FirstOrDefault(x => x.Id == userItem.Id) != null)
            return base.CreateCredentials(claimsIdentity);

        context.AccountItems.Add(accountItem);
        context.UserItems.Add(userItem);
        context.SaveChanges();

        return base.CreateCredentials(claimsIdentity);
    }
}
[/sourcecode]

Using the CustomFacebookLoginProvider

In order to use my implementation of the login provider, I needed to go ahead and add it to the ConfigOptions.

[sourcecode language="csharp"]
options.LoginProviders.Add(typeof(CustomFacebookLoginProvider));
[/sourcecode]

Scopes

One thing I forgot when I first implemented this approach was ensuring that I updated my scopes. By default, Facebook won't provide me with the information I require. In order to have Azure Mobile Services pass the correct request to Facebook, I needed to log into my Azure management portal and add MS_FacebookScope to my App Settings. The exact scope I'm requesting is email user_birthday user_friends.

Disclaimer

BeerDrinkin (especially the backend) is worked on mostly whilst 'testing' the app (drinking beer). Some of the code is horrible and needs to be refactored. The above code could 100% do with a tidy-up, but it works, so I've left it as is. The project is on GitHub, so please do contribute if you can.