Continuous delivery of macOS apps built with Swift

Anyone familiar with my ramblings will be aware that I mostly develop in C#, using a mixture of Xamarin and .NET Core depending on what I’m building. Earlier this year, I decided to get serious about learning Swift and started by building a simple utility app for macOS to help me find training images for some machine learning.


The app has mostly just sat on GitHub with little love since I originally published it, so this week I dusted it off (I actually just cloned it again from GitHub, but I like the metaphor) and started implementing a few of the features that didn’t make it into the first release. The most obvious is file tagging, which adds file system tags to exported images, making them much easier to find later.


I shared a gif of the new build with a colleague, and he loved it so much that he wanted a copy. Now, I could easily have behaved like an animal, built a version on my development machine, and sent over the results. Instead, I opted to listen to the sane part of my brain that was calling for me to set up a CI/CD pipeline.

Enter Microsoft’s App Center

If you’re not familiar with App Center, then you’re in for a treat! App Center provides a one-stop shop for the services app developers are likely to need, including build, test, distribution, analytics and crash reporting, to name a few. Today I’m going to focus on the build aspect, but I’ll cover other features in upcoming posts.

Microsoft has been working hard on adding new features to App Center, and one of those features, currently in preview, is the ability to build Swift macOS apps. The setup process only requires a few clicks, and we’re up and running. Below is a gif of the process, recorded in real time, which shows how quickly I managed to get a build set up and running.

[GIF: setting up an App Center build in real time]

App Center Build Setup

To get started, we have to create a new app within App Center and specify a name, OS and platform as a minimum. In my case, I only really need to worry about selecting macOS, as App Center currently only supports Objective-C and Swift as languages for macOS app development.

Setting up the build pipeline

Once we’ve clicked “Add New App”, we’ll be presented with a screen encouraging us to integrate the App Center SDK into our app. I’ll cover the advantages of this in another post, as it isn’t required in order to use App Center. Did I mention that every feature in App Center is optional? In this post, we’re only going to use the build and distribute functionality and ignore everything else.


Connecting to GitHub

As mentioned earlier in the post, the code is hosted on GitHub, which App Center integrates with. This allows me to connect App Center to the repository and have it automatically trigger a build any time I push to a branch.


Once I’ve selected GitHub, I’m presented with a list of all my repositories, from which I select the one I wish to link to my App Center app.


In this example, the repository only has one branch, so I’ll select that puppy and move on to configuration.



Build Configuration

We want to do a few things with the build configuration. First, it has to sign the build for distribution using my Apple certificates; second, I want the app’s version number to be incremented automatically.


Signing builds

In order to sign builds for distribution, we’ll need to upload a copy of our .p12 file and a valid provisioning profile.
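
If you’re not sure which signing identities are available on your machine before exporting the .p12 from Keychain Access, you can list them from Terminal (this is a standard macOS tool, not an App Center requirement):

# List the valid code-signing identities in your keychain
security find-identity -v -p codesigning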


Incrementing build numbers

App Center natively understands our project’s Info.plist file (thanks to the work done on supporting iOS), so incrementing the build number only requires a few button clicks to configure.
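
Under the hood, the number being incremented is the CFBundleVersion value in Info.plist, with CFBundleShortVersionString holding the marketing version. If you want to surface these in your app, here’s a minimal Swift sketch of reading them from the bundle:

import Foundation

// Read the version and build number from the app bundle's Info.plist.
// CFBundleShortVersionString is the marketing version (e.g. "1.2");
// CFBundleVersion is the build number that App Center increments.
let info = Bundle.main.infoDictionary
let version = info?["CFBundleShortVersionString"] as? String ?? "unknown"
let build = info?["CFBundleVersion"] as? String ?? "unknown"
print("Version \(version) (build \(build))")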


Distribution

We’re almost finished with the build setup, but we’ve one last step to configure, and that’s distribution!

By default, the distribution list is a little lonely, as it’ll just be you, but as you find people excited to try your apps, you can add them to lists and control which versions of the app they get. For example, you might want your VIPs to get GM access and staff to have access to betas.


Adding distribution groups

To set up my VIPs distribution list, I head over to the “Distribute” beacon on the left-hand menu and click “Add Group”.


Right now I’ve only one VIP, and that’s my colleague Dean, but this is enough to demonstrate the functionality. It’s worth noting that I need to pop back to the build configuration and update the distribution to VIPs if I want Dean to get a copy of the builds triggered from master.


Distribution email

And with only a few clicks, my users will now get a nice email with a link to install the latest and greatest builds of my app!


Conclusion

App Center is a powerful tool for app developers, streamlining development processes from build and distribution through to monitoring after release. I hope this post has helped you understand how easy it can be to set up a CI/CD pipeline for macOS apps developed with Swift 4.0. If you’ve any questions or feedback, then please don’t hesitate to reach out.

Xamarin with macOS 10.14 (Mojave)

It’s that time of year again, when we all ask ourselves “should I install this beta software on my devices and risk my development setup?”. If you’ve only one iPhone and one Mac, it can be difficult to decide when it’s the right time to install the latest and greatest offerings from Apple, for fear of breaking your development environment.

This year I’ve not needed to be so worried about breaking my environment, as my current projects see me developing more with ASP.NET and Swift than Xamarin, so I went ahead and downloaded both iOS 12 and macOS 10.14 as soon as I could.


How does it run?
First impressions of running Xamarin on macOS Mojave are better than expected! I fired up Visual Studio for Mac and opened the workshop that Robin-Manuel Thiel and I created, to see if I could get the iOS app to build using Xcode 9. The good news is that it works without any modification, assuming you had a working setup before upgrading.

With that said, I have experienced some crashes when resizing VS4Mac, but issues are to be expected at this stage.


Advice
If you can avoid it, don’t update just yet if your day-to-day development requires the Xamarin tooling to work. The Xamarin engineers will need some time to test and ensure everything works properly, as they too have only just downloaded a copy of the OS.

On the other hand, if you simply cannot wait to play with macOS Mojave, then know that at a minimum, you can continue to build and deploy with the latest beta software from our friends in Cupertino.


– Opinions are my own and not the views of my employer

Consuming Microsoft Cognitive Services with Swift 4

This post is a direct result of a conversation with a colleague in a taxi in Madrid. We were driving to Santiago Bernabéu (the Real Madrid Stadium) to demonstrate to business leaders the power of artificial intelligence.

The conversation was around the ease of use of Cognitive Services for what we call “native native” developers. We refer to those who use Objective-C, Swift or Java as “native native”, as frameworks like React Native and Xamarin are also native, but we consider these “XPlat native”. He argued that the lack of Swift SDKs prevented the adoption of our AI services, such as our Vision APIs.

I maintained that all Cognitive Services APIs are well documented and that we provide an easy-to-consume suite of REST APIs, which any Swift developer worth their salt should be able to use with minimal effort.

Putting money where my mouth is

Having made such a statement, it made sense for me to test whether my assertion was correct by building a sample app that integrates with Cognitive Services using Swift.

Introducing Bing Image Downloader: a fully native macOS app for downloading images from Bing, developed using Swift 4.


I’ve put the code on GitHub for you to download and play with if you’re interested in using Cognitive Services within your Swift apps, but I’ll also explain below how I went about building the app.

Where the magic happens

In the interest of good development practices, I started by creating a protocol (C# developers should think of these as interfaces) to define which functions the ImageSearch class will implement.

Protocol

protocol ImageServiceProtocol {
    // Takes the results and adds them to a hard-coded singleton class called AppData.
    func searchForImageTerm(searchTerm: String)

    // Takes a completion handler for processing the results of the search.
    func searchForImageTerm(searchTerm: String, completion: @escaping ([ImageSearchResult]) -> ())
}

Two Implementations for one problem

I’ve made sure to include two implementations, to give you options on how you’d want to interact with Cognitive Services. The approach used in the app makes use of a singleton class for storing AppData, as well as Alamofire for handling network requests. We’ll look at this approach first.

searchForImageTerm

This is the public func, which is easiest to consume.

func searchForImageTerm(searchTerm: String) {

    // Search for images and add each result to AppData
    DispatchQueue.global(qos: .background).async {
        let totalPics = 100
        let picsPerPage = 50
        let numPages = totalPics / picsPerPage
        (0 ..< numPages)
            .compactMap { self.createUrlRequest(searchTerm: searchTerm, pageOffset: $0) }
            .forEach { self.fetchRequest(request: $0 as NSURLRequest) }

        // Keep the background run loop alive while the fetch tasks complete
        RunLoop.current.run()
    }
}

createUrlRequest

private func createUrlRequest(searchTerm: String, pageOffset: Int) -> URLRequest {

    let encodedQuery = searchTerm.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!
    let endPointUrl = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"

    let mkt = "en-us"
    let imageType = "photo"
    let size = "medium"

    // We should move these variables to app settings
    let imageCount = 100
    let pageCount = 2
    let picsPerPage = imageCount / pageCount

    let url = URL(string: "\(endPointUrl)?q=\(encodedQuery)&count=\(picsPerPage)&offset=\(pageOffset * picsPerPage)&mkt=\(mkt)&imageType=\(imageType)&size=\(size)")!

    var request = URLRequest(url: url)
    // apiKey holds the Cognitive Services subscription key
    request.setValue(apiKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")

    return request
}

fetchRequest

This is where we attempt to fetch and parse the response from Bing. If we detect an error, we log it (I’m using SwiftyBeaver for logging).

If the response contains data we can decode, we’ll loop through and add each result to our AppData singleton instance.

private func fetchRequest(request: NSURLRequest) {
    // This task is responsible for downloading a page of results
    let task = URLSession.shared.dataTask(with: request as URLRequest) { (data, response, error) -> Void in

        // We didn't receive a response
        guard let data = data, error == nil, response != nil else {
            self.log.error("Fetch Request returned no data : \(request.url?.absoluteString ?? "")")
            return
        }

        // Check the response code
        guard let httpResponse = response as? HTTPURLResponse,
            (200...299).contains(httpResponse.statusCode) else {
            self.handleServerError(response: response!)
            return
        }

        // Convert the data to a concrete type
        do {
            let decoder = JSONDecoder()
            let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)

            // Discard images with an encoding format we don't understand
            let imagesToAdd = bingImageSearchResults.images.filter { $0.encodingFormat != EncodingFormat.unknown }
            AppData.shared.addImages(imagesToAdd)
        } catch {
            self.log.error("Error decoding ImageResultWrapper : \(error)")
            self.log.debug("Corrupted Base64 Data: \(data.base64EncodedString())")
        }
    }

    // Tasks are created in a paused state. We call resume() to start the fetch.
    task.resume()
}
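
For context, ImageResultWrapper is the Codable type the decoder maps Bing’s JSON onto. The real model lives in the repo; below is a minimal sketch of what such a wrapper might look like, assuming the Bing v7 response shape (results in a top-level “value” array) and treating the exact fields as illustrative:

// A minimal sketch of a Codable wrapper for the Bing Image Search v7 response.
struct ImageResultWrapper: Codable {
    let images: [ImageSearchResult]

    // Bing returns its results in a top-level "value" array
    enum CodingKeys: String, CodingKey {
        case images = "value"
    }
}

struct ImageSearchResult: Codable {
    let name: String?
    let contentUrl: String?
    let encodingFormat: EncodingFormat
}

enum EncodingFormat: String, Codable {
    case jpeg, png, gif, bmp
    case unknown

    // Map any unrecognised format string to .unknown rather than failing to decode
    init(from decoder: Decoder) throws {
        let raw = try decoder.singleValueContainer().decode(String.self)
        self = EncodingFormat(rawValue: raw) ?? .unknown
    }
}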

Option two (with no third-party dependencies)

As a .NET developer, the next approach threw me for a while and took a little bit of reading about closures to fully grasp. With this approach, I originally wanted to return an array of ImageSearchResult, but that proved not to be the best fit for an asynchronous API. Instead, I pass in a completion handler that processes the array of results once the request finishes.

// Search for images with a completion handler for processing the result array
func searchForImageTerm(searchTerm: String, completion: @escaping ([ImageSearchResult]) -> ()) {

    // Because Cognitive Services requires a subscription key, we need to create a URLRequest
    // to pass into the dataTask method of a URLSession instance.
    let request = createUrlRequest(searchTerm: searchTerm, pageOffset: 0)

    // This task is responsible for downloading a page of results
    let task = URLSession.shared.dataTask(with: request, completionHandler: { (data, response, error) -> Void in

        // We didn't receive a response; call back with an empty result set so the caller isn't left hanging
        guard let data = data, error == nil, response != nil else {
            print("something is wrong with the fetch")
            completion([ImageSearchResult]())
            return
        }

        // Check the response code
        guard let httpResponse = response as? HTTPURLResponse,
            (200...299).contains(httpResponse.statusCode) else {
            self.handleServerError(response: response!)
            completion([ImageSearchResult]())
            return
        }

        // Convert the data to a concrete type
        do {
            let decoder = JSONDecoder()
            let bingImageSearchResults = try decoder.decode(ImageResultWrapper.self, from: data)

            // We use the closure to pass back our results.
            completion(bingImageSearchResults.images)
        } catch {
            self.log.error("Decoding ImageResultWrapper \(error)")
            completion([ImageSearchResult]())
        }
    })
    task.resume()
}
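
Calling the completion-handler variant then looks something like this (ImageService as the name of the concrete class is my assumption here; check the repo for the actual type name):

// Hypothetical usage, assuming ImageService conforms to ImageServiceProtocol
let service: ImageServiceProtocol = ImageService()
service.searchForImageTerm(searchTerm: "husky") { results in
    // The closure runs once the fetch and decode have finished
    print("Received \(results.count) images")
}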

Wrapping Up

You can find the full project on my GitHub page, which contains everything you need to build your own copy of this app (maybe for iOS rather than macOS?).

If you have any questions, then please don’t hesitate to comment or email me!


How to fix the “IPv4 loopback interface: port already in use” error

Super quick post here. Sometimes when debugging your .NET Core application on a Mac, you’ll find the port won’t free up, and thus you can’t redeploy without hitting the following fatal error:

Unable to start Kestrel. System.IO.IOException: Failed to bind to address http://localhost:5000 on the IPv4 loopback interface: port already in use.

To fix this, you’ll need to fire up Terminal and enter the following:

sudo lsof -i :5000

In my case, the output looked something like the following (illustrative; the process name, user and PID will differ on your machine):

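COMMAND   PID    USER    FD   TYPE   DEVICE     SIZE/OFF  NODE  NAME
dotnet    18057  myuser  11u  IPv4   0x9a3c...  0t0       TCP   localhost:5000 (LISTEN)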

The error message references IPv4, so scanning the TYPE column lets me quickly find the PID, which I’ll use to kill the process. I do this with the following command:

kill -9 18057
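
As an aside, if you find yourself doing this often, the two steps can be combined into one command using lsof’s -t flag, which prints just the PID:

# Kill whatever process is listening on port 5000
kill -9 $(lsof -ti :5000)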

With that done, I can now get back to debugging my .NET Core web API on macOS.