Consuming Microsoft Cognitive Services with Swift 4

This post is a direct result of a conversation with a colleague in a taxi in Madrid. We were driving to Santiago Bernabéu (the Real Madrid Stadium) to demonstrate to business leaders the power of artificial intelligence.

The conversation was around the ease of use of Cognitive Services for what we call “native native” developers. We refer to those who use Objective-C, Swift or Java as ‘native native’, since frameworks like ReactNative and Xamarin are also native, but we consider these “XPlat Native”. He argued that the lack of Swift SDKs prevented the adoption of our AI services, such as our Vision APIs.

I maintained that all Cognitive Services APIs are well documented and that we provide an easy-to-consume suite of REST APIs, which any Swift developer worth their salt should be able to use with minimal effort.

Putting money where my mouth is

Having made such a statement, I figured I should test whether my assertion was correct by building a sample app that integrates with Cognitive Services using Swift.

Introducing Bing Image Downloader. A fully native macOS app for downloading images from Bing, developed using Swift 4.

[Screenshot: Bing Image Downloader running on macOS]

I’ve put the code on GitHub for you to download and play with if you’re interested in using Cognitive Services within your Swift apps, but I’ll also explain below how I went about building the app.

Where the magic happens

In the interest of good development practices, I started by creating a Protocol (C# developers should think of these as Interfaces) to define what functions the ImageSearch class will implement.

Protocol

protocol ImageServiceProtocol {
// We will take the results and add them to a singleton class called AppData.
func searchForImageTerm(searchTerm : String)

// We pass in a completion handler for processing the results of this func
func searchForImageTerm(searchTerm : String, completion : @escaping ([ImageSearchResult]) -> ())
}

Two Implementations for one problem

I’ve made sure to include two implementations to give you options on how you’d want to interact with Cognitive Services. The approach used in the app makes use of a singleton class (AppData) for storing results, as well as Alamofire for handling network requests. We’ll look at this approach first.
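For context, here’s a minimal sketch of what that singleton might look like. The addImages function and the ImageSearchResult type come from the code later in this post; everything else is an assumption, and the class in the repo may well differ.

import Foundation

// A minimal, assumed sketch of the AppData singleton (not the exact class from the repo).
final class AppData {
    static let shared = AppData()
    private init() {}

    // Writes are serialised because results arrive from several background fetches.
    private let queue = DispatchQueue(label: "AppData.queue")
    private(set) var images = [ImageSearchResult]()

    func addImages(_ images: [ImageSearchResult]) {
        queue.sync { self.images.append(contentsOf: images) }
    }
}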

searchForImageTerm

This is the public func, which is easiest to consume.

func searchForImageTerm(searchTerm : String) {

    //Search for images and add each result to AppData
    DispatchQueue.global(qos: .background).async {
        let totalPics = 100
        let picsPerPage = 50
        let numPages = totalPics / picsPerPage

        (0 ..< numPages)
            .compactMap { self.createUrlRequest(searchTerm: searchTerm, pageOffset: $0) }
            .forEach { self.fetchRequest(request: $0 as NSURLRequest) }

        RunLoop.current.run()
    }
}
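Using it is then a one-liner. The call site below is hypothetical (the ImageSearch class name comes from the protocol discussion above; its initialiser is assumed), and the results land in AppData.shared rather than being returned.

// Hypothetical call site, e.g. from a search button handler.
let imageService: ImageServiceProtocol = ImageSearch()
imageService.searchForImageTerm(searchTerm: "husky")
// Results appear in AppData.shared as each page is fetched.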

createUrlRequest

private func createUrlRequest(searchTerm : String, pageOffset : Int) -> URLRequest {

    let encodedQuery = searchTerm.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!
    let endPointUrl = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"

    let mkt = "en-us"
    let imageType = "photo"
    let size = "medium" 

    // We should move this to app settings
    let picsPerPage = 50

    let url = URL(string: "\(endPointUrl)?q=\(encodedQuery)&count=\(picsPerPage)&offset=\(pageOffset * picsPerPage)&mkt=\(mkt)&imageType=\(imageType)&size=\(size)")!
        
    var request = URLRequest(url: url)
    request.setValue(apiKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
        
    return request
}
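As a quick sanity check, a first-page search for “husky” built with the values above would produce a request URL along these lines (illustrative only):

https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=husky&count=50&offset=0&mkt=en-us&imageType=photo&size=medium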

fetchRequest

This is where we attempt to fetch and parse the response from Bing. If we detect an error, we log it (I’m using SwiftyBeaver for logging).

If the response contains data we can decode, we’ll loop through and add each result to our AppData singleton instance.

private func fetchRequest(request : NSURLRequest) {
    //This task is responsible for downloading a page of results
    let task = URLSession.shared.dataTask(with: request as URLRequest) { (data, response, error) -> Void in

        //We didn't receive a response
        guard let data = data, error == nil, response != nil else {
            self.log.error("Fetch Request returned no data : \(request.url?.absoluteString ?? "")")
            return
        }

        //Check the response code
        guard let httpResponse = response as? HTTPURLResponse,
            (200...299).contains(httpResponse.statusCode) else {
            self.handleServerError(response: response!)
            return
        }

        //Convert data to a concrete type
        do {
            let decoder = JSONDecoder()
            let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)

            let imagesToAdd = bingImageSearchResults.images.filter { $0.encodingFormat != EncodingFormat.unknown }
            AppData.shared.addImages(imagesToAdd)
        } catch {
            self.log.error("Error decoding ImageResultWrapper : \(error)")
            self.log.debug("Corrupted Base64 Data: \(data.base64EncodedString())")
        }
    }

    //Tasks are created in a paused state. We call resume to start the fetch.
    task.resume()
}

Option two (with no 3rd party dependencies)

As a .NET developer, the next approach threw me for a while, and it took a little reading about closures to fully grasp. With this approach, I originally wanted to return an array of the ImageSearchResult type, but that proved not to be the best fit. Instead, I pass in a completion handler that processes the array of results.

// Search for images with a completion handler for processing the result array
func searchForImageTerm(searchTerm : String, completion : @escaping ([ImageSearchResult]) -> ()) {
        
    //Because Cognitive Services requires a subscription key, we need to create a URLRequest to pass into the dataTask method of a URLSession instance..
    let request = createUrlRequest(searchTerm: searchTerm, pageOffset: 0)
       
    //This task is responsbile for downloading a page of results
    let task = URLSession.shared.dataTask(with: request, completionHandler: { (data, response, error) -> Void in
            
    //We didn't recieve a response
    guard let data = data, error == nil, response != nil else {
        print("something is wrong with the fetch")
        return
    }
            
    //Check the response code
    guard let httpResponse = response as? HTTPURLResponse,
    (200...299).contains(httpResponse.statusCode) else {
        self.handleServerError(response : response!)
        completion([ImageSearchResult]())
        return
    }
            
    //Convert data to concrete type
    do
    {
        let decoder = JSONDecoder()
        let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)
                
        //We use a closure to pass back our results.
        completion(bingImageSearchResults.images)
                
    } catch { self.log.error("Decoding ImageResultWrapper \(error)") }
    })
    task.resume()
}
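Calling this second version looks something like the snippet below. The call site is hypothetical (only searchForImageTerm, ImageServiceProtocol and ImageSearchResult come from the code above; the ImageSearch initialiser is assumed).

// Hypothetical call site for the completion-handler variant.
let imageService: ImageServiceProtocol = ImageSearch()
imageService.searchForImageTerm(searchTerm: "husky") { results in
    // URLSession calls this closure on a background queue, so hop to the main thread for UI work.
    DispatchQueue.main.async {
        print("Received \(results.count) images")
    }
}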

Wrapping Up

You can find the full project on my GitHub page, which contains everything you need to build your own copy of this app (maybe for iOS rather than macOS?).

If you have any questions, then please don’t hesitate to comment or email me!

 

Updated Resilient Networking with Xamarin

Rob Gibbons wrote a fantastic blog post back in 2015 on how best to write network request layers for your Xamarin apps. I’ve personally used this approach many times, but I felt that it needed updating for 2018, so here it is: a slightly updated approach to resilient networking services with Xamarin. And when I say ‘slightly updated’, I honestly mean it’s a minor change!


Refit

For those of you who are familiar with Rob’s approach, he pulls together a few libraries to create a robust networking layer. One of the critical elements of his strategy is the use of Refit. Refit is a REST library which allows us to interact with remote APIs with minimal boiler-plate code. It makes heavy use of generics and abstractions to define our REST API calls as C# interfaces, which are then used with an HttpClient instance to handle all the requests. All serialisation is dealt with for us! I still believe Refit to be a great library, so we’ll keep it as the core of this pattern.

Let’s have a look at an example interface for use with Refit.

public interface IBeerServiceAPI
{
    [Get("/beer/")]
    Task<List<Beer>> GetBeers();
}

We use attributes to define the request type as well as its path (relative to the HttpClient’s base URL).

We then define what we expect back from the API and leave Refit to handle making the call, deserialising the response and handing it back to us as a concrete type.

To expand on this, we can add many more types of requests.

[Get("/beer/{id}/")]
Task<Beer> GetBeerById(string id);

[Post("/beer/")]
Task<Beer> CreateBeer([Body] Beer beer);

[Delete("/beer/{id}/")]
Task DeleteBeer(string id);

[Put("/beer/{id}/")]
Task UpdateBeer(string id, [Body] Beer beer);

We can now use the interface to make calls to our remote endpoint. I usually place these methods within a class that is unique to the service I’m calling. In this example, it’d be a “BeersService”.

//Create new beer item
public async Task<Beer> CreateBeerAsync(Beer beer)
 {
    var apiInstance = RestService.For<IBeerServiceAPI>(Helpers.Constants.BaseUrl);
    return await apiInstance.CreateBeer(beer);
}

//Get by ID
public async Task<Beer> GetBeerByIdAsync(string id)
{
    var apiInstance = RestService.For<IBeerServiceAPI>(Helpers.Constants.BaseUrl);
    return await apiInstance.GetBeerById(id);
}

That’s all it takes for us to start interacting with a remote API. If you’re wondering how to test this, it’s incredibly easy to swap out implementations with mock services when using this architecture!
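As a rough illustration, a hypothetical in-memory mock might look like the sketch below. It assumes the extra endpoints above have been added to IBeerServiceAPI, and that Beer exposes an Id property; adjust to your own model.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical mock for unit tests; swap it in wherever IBeerServiceAPI is consumed.
public class MockBeerServiceAPI : IBeerServiceAPI
{
    readonly List<Beer> beers = new List<Beer>();

    public Task<List<Beer>> GetBeers() => Task.FromResult(beers);

    // Assumes Beer exposes an Id property.
    public Task<Beer> GetBeerById(string id) => Task.FromResult(beers.FirstOrDefault(b => b.Id == id));

    public Task<Beer> CreateBeer(Beer beer)
    {
        beers.Add(beer);
        return Task.FromResult(beer);
    }

    public Task DeleteBeer(string id)
    {
        beers.RemoveAll(b => b.Id == id);
        return Task.CompletedTask;
    }

    public Task UpdateBeer(string id, Beer beer) => Task.CompletedTask;
}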

Resiliency

Building a resilient networking service requires a few things. We need to understand what our current connectivity looks like, as well as find a solution for caching data locally to ensure our app still ‘works’ in offline situations.

We can achieve both of these tasks by leveraging packages from Motz. He’s created a plugin for checking connectivity status as well as developed a library for caching.

Let’s first take a look at connectivity status.

You’ll want to add the Connectivity Plugin NuGet package to every client project in the solution, as well as to the PCL. The following platforms are supported:

  • Xamarin.iOS
  • tvOS (Xamarin)
  • Xamarin.Android
  • Windows 10 UWP
  • Xamarin.Mac
  • .NET 4.5/WPF
  • .NET Core
  • Samsung Tizen

To use the connectivity plugin, we can simply make the following call:

var isConnected = CrossConnectivity.Current.IsConnected;

Caching

Now that we can check for connectivity, we can detect when we’re offline. Let’s have a look at how to handle that.

public async Task<List<Beer>> GetBeersAsync()
{
    //Handle online/offline scenario
    if (!CrossConnectivity.Current.IsConnected)
    {
        //If no connectivity, we need to fail... :(
        throw new Exception("No connectivity");
    }
    //Create an instance of the Refit RestService for the beer interface.
    var apiInstance = RestService.For<IBeerServiceAPI>(Helpers.Constants.BaseUrl);
    var beers = await apiInstance.GetBeers();

    return beers;
}

Returning no results for most requests isn’t a great solution. We can dramatically improve the user experience by keeping a cache of data to show in offline situations. To implement that, we’re going to use Monkey Cache. To use Monkey Cache, we first have to configure the ApplicationId. A folder is created on disk for your app using the ApplicationId, so you should avoid changing it.

Barrel.ApplicationId = "your_unique_name_here";

Adding Monkey Cache is super simple. First of all, we want to define a key. Think of this as the collection (barrel) name. After that, we implement the necessary logic to handle caching.

public async Task<List<Beer>> GetBeersAsync()
{
    var key = "Beers";

    //Handle online/offline scenario
    if (!CrossConnectivity.Current.IsConnected && Barrel.Current.Exists(key))
    {
        //If no connectivity, we'll return the cached beers list.
        return Barrel.Current.Get<List<Beer>>(key);
    }

    //If the data isn't too old, we'll go ahead and return it rather than call the backend again.
    if (!Barrel.Current.IsExpired(key) && Barrel.Current.Exists(key))
    {
        return Barrel.Current.Get<List<Beer>>(key);
    }            

    //Create an instance of the Refit RestService for the beer interface.
    var apiInstance = RestService.For<IBeerServiceAPI>(Helpers.Constants.BaseUrl);
    var beers = await apiInstance.GetBeers();

    //Save beers into the cache
    Barrel.Current.Add(key: key, data: beers, expireIn: TimeSpan.FromHours(5));

    return beers;
}

Polly

Returning to Rob’s original post, we’ll want to add Polly. Polly helps us handle network requests sanely, allowing us to retry and to process failures robustly.

We’re going to use Polly to define retry logic that retries the request up to five times, each time waiting twice as long as before (2, 4, 8, 16 and then 32 seconds).

public async Task<List<Beer>> GetBeersAsync()
{
    var key = "Beers";

    //Handle online/offline scenario
    if (!CrossConnectivity.Current.IsConnected && Barrel.Current.Exists(key))
    {
        //If no connectivity, we'll return the cached beers list.
        return Barrel.Current.Get<List<Beer>>(key);
    }

    //If the data isn't too old, we'll go ahead and return it rather than call the backend again.
    if (!Barrel.Current.IsExpired(key) && Barrel.Current.Exists(key))
    {
        return Barrel.Current.Get<List<Beer>>(key);
    }            

    //Create an instance of the Refit RestService for the beer interface.
    var apiInstance = RestService.For<IBeerServiceAPI>(Helpers.Constants.BaseUrl);

    //Use Polly to handle retrying (helps with bad connectivity) 
    var beers = await Policy
        .Handle<WebException>()
        .Or<HttpRequestException>()
        .Or<TimeoutException>()
        .WaitAndRetryAsync
        (
            retryCount: 5,
            sleepDurationProvider: retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))
        ).ExecuteAsync(async () => await apiInstance.GetBeers());


    //Save beers into the cache
    Barrel.Current.Add(key: key, data: beers, expireIn: TimeSpan.FromHours(5));

    return beers;
}

Wrapping Up

This is a great way to implement the networking layer in your apps, as it can sit within a .NET Standard library and be shared by all your client projects.

If you’d like to see a more real-world example of this approach, then check out the Mobile Cloud Workshop I created with Robin-Manuel. The Xamarin.Forms app uses this approach, and it’s been working very well for us!

Big thanks to Rob for the original post and for documenting such a simple solution to a complex problem!

Auto Layout 101 with Xamarin

Until recently I’d done an amazing job of avoiding Auto Layout on anything other than demo apps, instead opting to create my layouts with springs and struts. All my apps in the App Store use the old approach which, although exceptionally easy to create, is limited when running across all the different form factors that iOS runs on.

With multitasking on the iPad requiring Auto Layout, I thought it was probably about time I took the time to learn to love the controversial layout engine.

What is Auto Layout?

Auto Layout is a constraint-based layout system for iOS, tvOS and OS X. It allows me to create adaptive user interfaces that respond appropriately to changes in screen size and orientation.

Auto Layout is supported in both Xamarin Studio and Visual Studio when using Xamarin.iOS, but it isn’t applicable to Xamarin.Forms developers. They don’t need to worry, as Forms apps already work fantastically on the iPad with multitasking!

Layout constraints

Constraints are a mathematical representation of the relationships between views. The NSLayoutConstraint class is used to create constraints on both iOS and OS X, but for the most part you’ll want to create constraints using our iOS Designer rather than programmatically, as it’s much easier.

Auto Layout supports a number of constraint types, including size, alignment and spacing. By providing each view within a scene with constraints, Auto Layout can determine the frame of each view at runtime.

One really important tip which will help you solve Auto Layout issues is to remember: most controls will require constraints that define their height, width, x position and y position.
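For completeness, here’s a minimal sketch of what those four constraints can look like when created programmatically in Xamarin.iOS, using the layout anchor APIs (which create NSLayoutConstraint objects for you). The view, sizes and class name here are illustrative; the rest of this post uses the iOS Designer instead.

using UIKit;

public class CenteredBoxViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        var box = new UIView { BackgroundColor = UIColor.Blue };
        box.TranslatesAutoresizingMaskIntoConstraints = false;
        View.AddSubview(box);

        // The four constraints from the tip above: width, height, x and y.
        NSLayoutConstraint.ActivateConstraints(new[]
        {
            box.WidthAnchor.ConstraintEqualTo(100f),
            box.HeightAnchor.ConstraintEqualTo(100f),
            box.CenterXAnchor.ConstraintEqualTo(View.CenterXAnchor),
            box.CenterYAnchor.ConstraintEqualTo(View.CenterYAnchor)
        });
    }
}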

Getting Started

To get started, I want to center a UIView within my scene. I want it to remain in the center of the scene no matter what size of device it’s running on.

[Screenshot: a blue UIView placed in the scene in the iOS Designer]

Above you can see that I’ve added a UIView to my scene and set its background colour to blue. Although it looks like it’s centered in my scene, no iOS device actually has this form factor. We can use the View As drop-down to simulate the different sizes. Let’s see how this would look on an iPhone 5s with no constraints.

[Screenshot: the same scene previewed as an iPhone 5s, with the view no longer centered]

We can see that my UIView isn’t positioned correctly! To resolve this, let’s go ahead and add some constraints.

I’m going to lock its height and width. To do this, I’ll click the handle bar button.

[Screenshot: the handle bar buttons used to pin the view’s width and height]

This has now locked the width and height of my view so that no matter what device I’m running on, the box will always remain the same dimensions. You’ll note that the lines are currently orange, which means the control’s constraints contain some errors.

If you recall my tip from earlier, you’ll know that we’re only 50% of the way to completing the constraints for this view. We have yet to give Auto Layout any information about where to position it.

To do this, I click on the orange button in the middle of the view and connect it to the vertical line running through the scene, then repeat this for the horizontal line. This tells Auto Layout that I want this view’s X and Y to be centered on the center of the superview.

[Screenshot: connecting the view to the vertical and horizontal center lines]

When the view has four constraints (width, height, x and y), you’ll see the lines turn blue. This marks a valid set of constraints for the view and lets us know everything is valid. If you see orange lines, it’s Auto Layout telling you that something’s gone wrong. Don’t worry if you see orange while you’re still editing; it’s perfectly normal.

[Screenshot: the view with a complete set of constraints, shown with blue constraint lines]

Wrapping up

This is a crazy-simple demo of Auto Layout, but it provides you with the basics you need to get started.

Come back in a few weeks to see more Auto Layout goodness as I convert an existing login screen to use Auto Layout.