Categories: Bluetooth

32feet.NET and Audio

There are a few different Bluetooth profiles which handle audio, but they all work in a very similar way. There are two connections open between the client (usually a phone) and the server (some kind of audio device such as a speaker or car entertainment system).

The first of these is an Rfcomm channel which handles commands between the devices. Rfcomm is essentially a serial connection emulated over Bluetooth, and the commands sent over it are often a mixture of AT commands from the world of modems and profile-specific commands for associated functionality (think phone book contacts, track names etc.).

The second channel is a low-level SCO (Synchronous Connection-Oriented) connection which is better suited to real-time audio data. Depending on the profile this may carry one-way audio (e.g. music playback) or two-way audio (hands-free calls etc.).

32feet.NET only has support for Rfcomm out of the box. This means it is possible to establish a connection to a headset device and even do things like capture button presses and send rings, but it does not support opening an audio channel. Also, if you connect to a headset device or similar rather than using the platform's built-in support, you'll block the device from using its native functionality. Mobile devices already have support and drivers for headset/hands-free profiles which route through the platform's normal audio APIs, so there is rarely a need to try and interfere with this.
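
Purely to illustrate what that command channel looks like, here is a minimal sketch using the classic 32feet.NET API (InTheHand.Net.Personal). The device address is a placeholder, error handling is omitted and, per the above, connecting like this will block the headset's normal use:

using InTheHand.Net;
using InTheHand.Net.Bluetooth;
using InTheHand.Net.Sockets;

// Placeholder address - substitute the address of a paired headset
var address = BluetoothAddress.Parse("00:11:22:33:44:55");

var client = new BluetoothClient();

// Connect to the Headset profile's Rfcomm channel
client.Connect(new BluetoothEndPoint(address, BluetoothService.Headset));

// The resulting stream carries AT-style commands - not audio
var stream = client.GetStream();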

If you want your app to play audio over Bluetooth, pair the device with the OS and simply play audio; the system will route it for you.

Categories: Alexa, Uncategorized

Alexa Skill with Azure Functions – Messaging

In the previous Alexa post I talked about building a List skill to integrate with a third-party list provider. This gives you a mechanism to react to changes in Alexa’s lists and write them to your external provider, but what about implementing a two-way sync?

When you set up account linking for your skill the user goes through an OAuth flow to authorise your app, and this returns a token and a refresh token to Amazon. The Alexa infrastructure stores these securely and handles the refresh process for you, which means only your skill function can continue to access your third-party service. So when your third-party provider fires a change callback, you need a way of passing the change information into your skill to be processed. Luckily there is a messaging service to do exactly this.

As with the list functionality there is a library to handle the messaging requests – Alexa.NET.SkillMessaging. The code which sends the message needs the client id and client secret of your skill; you can find these in the Alexa Developer Console on the web.

var client = new AccessTokenClient(AccessTokenClient.ApiDomainBaseAddress);
var accessToken = await client.Send(clientId, clientSecret);

This access token can then be used to send messages to your skill. Each message consists of a payload, which is a Dictionary&lt;string,string&gt;, and a timeout. You create a SkillMessageClient and send the message to a specified user id. The Amazon user id is given to you when your skill is first enabled and the account is linked. The id is specific to the skill and cannot be used to personally identify a user.

var payload = new Dictionary<string, string> { { "Key", "Some Value" } };
var messageClient = new Alexa.NET.SkillMessaging.SkillMessageClient(alexaEndpoint + "/v1/skillmessages/users/", accessToken.Token);
var messageToSend = new Alexa.NET.SkillMessaging.Message(payload, 3600);
var messageId = await messageClient.Send(messageToSend, userId);

An extra complication is that there are multiple API endpoints for the SkillMessageClient depending on the region, so you'll have to store the endpoint along with the Alexa user id so you know which to use for a specific user. If the send is successful a unique id for the message is returned. In your skill code you then have to add support to recognise the incoming message and handle the action; in the case of a list change event from a third-party provider this means loading the changed item and writing the values to the Alexa list.

As with the list support we need to register the messaging library so that the skill request can be correctly deserialised into a MessageReceivedRequest.

RequestConverter.RequestConverters.Add(new MessageReceivedRequestTypeConverter());

Then when reading your incoming request you can check the request type and add code to process the message. The MessageReceivedRequest contains a Message property with the dictionary of values sent from your other function. The user id is already included with all incoming requests in the Context.System.User.UserId property.
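
Putting that together, a rough sketch of the handling code might look like this (skillRequest is the deserialised incoming request; the list-loading step is left as a comment):

// skillRequest is the deserialised incoming Alexa.NET SkillRequest
if (skillRequest.Request is MessageReceivedRequest messageRequest)
{
    // The payload dictionary sent from the other function
    Dictionary<string, string> values = messageRequest.Message;

    // The id of the user the message was addressed to
    string userId = skillRequest.Context.System.User.UserId;

    // e.g. look up the changed item in the third-party provider
    // and write it to the Alexa list here
}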

Combining this with the list support already discussed you can see how to use the ListManagement API to write changes into the Alexa lists.

Categories: Windows 10

WebAuthenticationBroker and GitHub

WebAuthenticationBroker is a component of Windows 10 which facilitates OAuth authentication with services from a client app. It handles the presentation and navigation of the authentication pages and returns control to your app along with a returned token or an error code. It's a UWP API and integrates neatly with modern apps; however, what may not be obvious is that it doesn't use the Edge browser like a WebView control does, but instead uses the legacy Internet Explorer browser.

This is a problem because it's old and a bit creaky and some sites don't work with it, or actively refuse to – one of these is GitHub. If you try to authenticate with GitHub using it you'll see a big ugly banner asking you to use a modern browser and you'll be stuck. Hopefully this will change and the API will move to Edge (or even the Chromium-powered Edge which is on the horizon), but in the meantime you'll need to roll your own solution.

There are lots of ways you could do this by using the WebView control in your app and handling navigation events, but I thought I'd try to recreate the same API so that I had a solution I could swap in without major changes. The result of this work is called – and excuse me, it was very late when I thought of it – Authful. The full code is online in GitHub here. I haven't released it as a NuGet library, and won't unless there is sufficient interest, as I thought people might prefer the code itself so that they can customise the look and feel for their own apps. There is a sample project to show how to call the API (hint: it's the same as the UWP class). It doesn't support the advanced options but it should work with most mainstream OAuth-based APIs.
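
For reference, the stock UWP call – which Authful deliberately mirrors – looks like this; the URIs are placeholders for your own OAuth endpoints:

using Windows.Security.Authentication.Web;

var startUri = new Uri("https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID");
var endUri = new Uri("https://yourapp.example/oauth/callback");

// Presents the OAuth pages and returns when the end URI is reached
WebAuthenticationResult result = await WebAuthenticationBroker.AuthenticateAsync(
    WebAuthenticationOptions.None, startUri, endUri);

if (result.ResponseStatus == WebAuthenticationStatus.Success)
{
    // ResponseData contains the full redirect URI including the code/token
    string response = result.ResponseData;
}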

https://github.com/inthehand/Authful

Categories: Alexa, Azure

Alexa List Skills with Azure Functions

When I was building my Microsoft To-Do Alexa skill I found Matteo's series of blog posts on Alexa + Azure very useful. However I needed to go beyond the functionality described there, and knew I'd need to delve deeper into the Alexa Skills Kit documentation. The first item I thought I'd blog about is the Alexa List Management API. I'm not going to open source the skill, but I thought I'd share some pointers which could be useful to other developers.

What is a List Skill?

Alexa has built-in lists for To-Do items and Shopping List. These are quite basic (there are no due dates or reminders, for example) but a third party can extend them to synchronise them with another back-end. When you create a list skill you aren't providing Alexa with an API into your datastore, but rather taking responsibility for maintaining synchronisation between two copies of the list. Your list items could be updated either from Alexa via the web or app or, more commonly, via a user's voice command. To support this your skill subscribes to a number of list events which fire whenever an item is added, modified or deleted. Likewise, on your back-end you'll need to handle changes and communicate them back to your Alexa skill. I'll discuss this messaging infrastructure in a future post.

In its simplest form a list skill doesn't have to have any speech interface of its own; it just has to handle the standard list events and request read (and likely write) permission on the Alexa household lists.

Building a List Skill

If your skill isn’t a regular custom skill you can’t edit the manifest through the Alexa Skill Kit dashboard and so you would use the Alexa Skill Kit command line tools. However rather than suffer the misery of a command prompt you can use trusty Visual Studio Code and the official Alexa Skills Kit Toolkit extension. This allows you to edit your skill metadata in Visual Studio Code’s editor and use the command palette to perform common operations like deploying the skill. The metadata for a skill is expressed in json and the editor has intellisense for the schema. A manifest for a list skill must contain an events object containing a list of standard event types:-

"events":{
      "endpoint":{
        "uri":"https://YourAzureFunctionAppEndpoint/api/FunctionName",
        "sslCertificateType": "Wildcard"
      },
      "subscriptions":[
       {
         "eventName": "SKILL_ENABLED"
       },
       {
         "eventName": "SKILL_DISABLED"
       },
       {
         "eventName": "SKILL_PERMISSION_ACCEPTED"
       },
       {
        "eventName": "SKILL_PERMISSION_CHANGED"
       },
       {
        "eventName": "SKILL_ACCOUNT_LINKED"
       },
       {
        "eventName": "ITEMS_CREATED"
       },
       {
        "eventName": "ITEMS_UPDATED"
       },
       {
        "eventName": "ITEMS_DELETED"
       },
       {
        "eventName": "LIST_CREATED"
       },
       {
        "eventName": "LIST_UPDATED"
       },
       {
        "eventName": "LIST_DELETED"
       }
      ]
    },

In order to be able to query the list items and write changes you must also request permissions. You'll need to be careful here, as the user can revoke these:-
    "permissions": [
      {
        "name": "alexa::household:lists:read"
      },
      {
        "name": "alexa::household:lists:write"
      }
    ]

The endpoint you defined for events will receive all of the event types, so you'll need to write code to read the event type and react accordingly. This is where we step into the unknown – the list events require a different request type from those covered by the core Alexa.NET library. The good news is that there is already a range of companion NuGet packages for specific Alexa APIs, and Alexa.NET.ListManagement has what we need here.

For each of these libraries we have to add a line of code at the beginning of our function to tell the main Alexa.NET library how to deserialise the request. For list management this is:-
RequestConverter.RequestConverters.Add(new ListSkillEventRequestTypeConverter());
            

Then from our deserialised SkillRequest we can check the request type, which will be a specific class depending on the list event, e.g. ListSkillItemCreatedRequest. The body of this request contains a list id, which may represent the To-Do list, the Shopping List or a custom list, along with one or more list item ids for the newly created item(s).
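
As a sketch, handling one of these request types might look like this (I've assumed the Body property names rather than verified them, so check the package for the exact shape):

// skillRequest is the deserialised incoming SkillRequest
if (skillRequest.Request is ListSkillItemCreatedRequest itemCreated)
{
    // Which list changed - could be To-Do, Shopping List or a custom list
    string listId = itemCreated.Body.ListId;

    // Ids of the newly created item(s) - fetch the details via the List Management API
    var itemIds = itemCreated.Body.ListItemIds;
}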

Modifying List Items

The other part of the ListManagement library is the ListManagementClient. This provides access to read and write list items, wrapping the REST API. The constructor takes an access token, which is passed to your skill in the skill request's Context.System.ApiAccessToken property. With this (assuming you have been granted the required permissions) you can query all list metadata and create, modify and delete list items. These are operations you'll mainly perform in response to changes in your linked back-end, so you can keep Alexa's copy of the list in sync with your own data.
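
A minimal sketch of creating the client is below. The constructor argument is the ApiAccessToken described above, but the metadata call is my assumption of the wrapper's naming, so verify the method names against the package:

// Token arrives with every skill request
var accessToken = skillRequest.Context.System.ApiAccessToken;
var management = new ListManagementClient(accessToken);

// Read the metadata for all of the user's lists (method name assumed)
var listsMetadata = await management.GetListsMetadata();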

In the next post I'll look at how to implement messaging so that your own system can send a message to your skill to update items…

Categories: Xamarin

Xamarin Forms Fast Renderers – Part 2 Android

Following on from Part 1, this post will briefly discuss the Android approach to fast renderers. Again there isn't really any documentation for control builders, but there are examples within the Xamarin Forms source to work from. Xamarin Android, like iOS, uses an IVisualElementRenderer interface which is very similar to the iOS equivalent; the differences come down to the two platforms' different approaches. For example, the NativeView and ViewController of iOS are represented by the View and ViewGroup properties on Android. ViewGroup can return null, but if the renderer uses a ViewGroup-derived class for laying out controls that can be returned here.

There are some additional methods such as SetLabelFor(id) and UpdateLayout(). The first supports the accessibility system on Android and allows this view to act as a descriptive label for another control. The latter calls a helper class, the VisualElementTracker, to update the layout.
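
A skeleton showing how these pieces fit together is below. Treat it as a sketch based on my reading of the Xamarin Forms source rather than a drop-in implementation; the renderer derives from ViewGroup so that it can return itself for both View and ViewGroup:

using System;
using Android.Content;
using Android.Views;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

public class MyControlRenderer : ViewGroup, IVisualElementRenderer
{
    VisualElementTracker _tracker;

    public MyControlRenderer(Context context) : base(context) { }

    public VisualElement Element { get; private set; }
    public VisualElementTracker Tracker => _tracker;

    // Android's equivalents of NativeView/NativeViewController on iOS
    public Android.Views.View View => this;
    public ViewGroup ViewGroup => this;

    public event EventHandler<VisualElementChangedEventArgs> ElementChanged;

    public SizeRequest GetDesiredSize(int widthConstraint, int heightConstraint)
    {
        Measure(widthConstraint, heightConstraint);
        return new SizeRequest(new Size(MeasuredWidth, MeasuredHeight));
    }

    public void SetElement(VisualElement element)
    {
        var oldElement = Element;
        Element = element;

        if (_tracker == null)
            _tracker = new VisualElementTracker(this);

        // hook element.PropertyChanged and create the native child view here

        ElementChanged?.Invoke(this, new VisualElementChangedEventArgs(oldElement, element));
    }

    // Mark this view as the accessibility label for the view with the given id
    public void SetLabelFor(int? id) => LabelFor = id ?? LabelFor;

    // Delegate layout updates to the VisualElementTracker helper
    public void UpdateLayout() => _tracker?.UpdateLayout();

    protected override void OnLayout(bool changed, int l, int t, int r, int b)
    {
        // position the native child view(s) here
    }
}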

Beyond these things the concept is very much the same, and you handle the same kinds of interaction with the Element, which is the platform-agnostic representation of the control and its properties.

Categories: iOS, Xamarin

Xamarin Forms Fast Renderers – Part 1 iOS

A Xamarin Forms renderer provides the device-specific logic to display a Xamarin Forms control using platform-native UI. Traditionally this was done using the ViewRenderer&lt;TElement,TNativeView&gt; base class. What this actually creates in the UI hierarchy is two controls – the outer being a basic placeholder providing layout logic and the inner being the desired native control (e.g. a UITextField in the case of an Entry on iOS).

This introduces overhead into the UI and complicates the layout logic as the whole page is arranged. The concept of a fast renderer does away with the enclosing ViewRenderer and instead requires you to implement an interface with the standard behaviour required by the Xamarin Forms layout system.

When I began re-writing my MediaElement for inclusion into Xamarin Forms I needed to replace the iOS renderer with a fast renderer, but there was very little documentation on building one. I found looking through the source for other renderers helpful – the Page and WebView renderers, for example, use fast renderers in the current codebase.

On iOS this interface is IVisualElementRenderer and it exposes a number of properties, an event and a few methods.

Properties:-

  • Element – returns the Xamarin Forms element which this renderer represents
  • NativeView – returns the native UIView-based control
  • NativeViewController – returns the UIViewController which manages the View

Events:-

  • ElementChanged – raised when an Element is assigned to the renderer

Methods:-

  • GetDesiredSize – returns a SizeRequest from a set of constraints. The control can alter this to fit required content for example. An extension method for UIView provides GetSizeRequest which will calculate the SizeRequest based on the constraints and optional minimum width/height.
  • SetElement – assigns the Element and causes the ElementChanged event to be raised. You’ll also hook up the PropertyChanged event here to react to changes in the Element and apply them to the NativeView.
  • SetElementSize – updates the layout to fit a specific size. Normally you call Layout.LayoutChildIntoBoundingRegion() to perform this.

A secondary interface IEffectControlProvider provides a single method to register an effect with the View.
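
Putting those members together, the shell of a fast renderer looks something like this. Treat it as a sketch: the member names follow the description above, so check the exact interface in the Xamarin Forms source for the version you're targeting.

using System;
using System.ComponentModel;
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

public class MyControlRenderer : UIView, IVisualElementRenderer
{
    public VisualElement Element { get; private set; }

    // In a fast renderer the renderer itself is the native control
    public UIView NativeView => this;

    // No separate view controller for a simple view-based control
    public UIViewController NativeViewController => null;

    public event EventHandler<VisualElementChangedEventArgs> ElementChanged;

    // Use the UIView extension mentioned above to size to content
    public SizeRequest GetDesiredSize(double widthConstraint, double heightConstraint)
        => this.GetSizeRequest(widthConstraint, heightConstraint);

    public void SetElement(VisualElement element)
    {
        var oldElement = Element;

        if (oldElement != null)
            oldElement.PropertyChanged -= OnElementPropertyChanged;

        Element = element;

        if (element != null)
            element.PropertyChanged += OnElementPropertyChanged;

        ElementChanged?.Invoke(this, new VisualElementChangedEventArgs(oldElement, element));
    }

    public void SetElementSize(Size size)
        => Layout.LayoutChildIntoBoundingRegion(Element,
            new Rectangle(Element.X, Element.Y, size.Width, size.Height));

    void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // apply changed Element properties to the native view here
    }
}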

By looking at the existing in-box renderers I was able to understand how they are implemented and re-write the MediaElementRenderer to use this pattern. For reference the full code of the iOS renderer is here in GitHub.

Not all the renderers in iOS use the new approach; I imagine it will be some time before all the existing renderers are re-written, and currently UWP and the other platforms still use the traditional approach. I'll follow up with the Android equivalent soon.

Categories: Alexa

Talking About Tasks

Back in 2010 Microsoft released Windows Phone 7. It was a huge change from the Pocket PC/Windows Mobile OS which had preceded it, and while it brought a modern UI and app-store infrastructure it missed a number of pieces of core functionality from the older phones. One of these was support for Tasks. I set about writing an app which became "Tasks In The Hand" and proved very popular in those early days. Even when Microsoft later added basic Tasks functionality in Windows Phone 7.5 the app still had a healthy following, because it supported views and features absent from the in-box app.

Skipping forward to today, Microsoft's Tasks story is rather different. After purchasing Wunderlist they began writing a new app called Microsoft To-Do, which is available across multiple platforms – iOS and Android for mobile and Windows for desktop. Crucially though, under the hood it's still based on Office 365 (or Outlook.com for personal Microsoft IDs) for storage, and so works just as well with the traditional Tasks view in Outlook on the desktop.

Back in 2010 we did not have voice assistants but now we have Alexa, Google Home and Cortana. If you get used to using Microsoft To-Do everywhere, as I have, you miss having integration with a voice assistant and so that is where I decided Tasks In The Hand needed to go next. Today my Alexa Skill was released into the store for anyone to connect with their Echo or similar device.

Tasks In The Hand in the Alexa Skills Store

The skill links with your Microsoft ID, which is either associated with an Office 365 account or an Outlook.com personal account. Once set up, you can add tasks to your Alexa To-Do list, or items to your Amazon Shopping List, and they'll synchronise with your Microsoft account. You can modify, complete or delete these items via Microsoft To-Do and those changes are synced back to Alexa. After you've linked accounts, any items you add to your default Tasks folder or Amazon Shopping List folder will also be synchronised with Alexa.

The skill was built using Azure Functions, and I found Matteo Pagani's series of blog posts very useful for getting started with the Alexa Skills Kit. It uses Tim Heuer's excellent Alexa.NET package to handle the interactions with Amazon.

The skill is completely free. I hope you find it as useful as I have, and please get in touch if you have any feedback.

Categories: Xamarin

Capture Android Screen Video from Visual Studio

While debugging your Xamarin Android app in Visual Studio you can capture a video of the device screen and transfer it to your PC. To do this, open the ADB command prompt from the Xamarin Android toolbar:-

[Screenshot: the Adb Command Prompt button on the Xamarin Android toolbar]

At the command prompt navigate to a folder where you want the video to end up. Type the following command to start recording:-

adb shell screenrecord /sdcard/filename.mp4

The path you supply must be a valid path on the device with enough space to store the video. The video can be up to three minutes long. To stop recording press Ctrl+C in the console window. Then, to copy the video to your PC, type:-

adb pull /sdcard/filename.mp4

Where obviously the path must match whatever you used in the first command. Once it shows that the pull was successful you can open the file and do whatever you need to do with it. You can do basic trimming using the Windows Photos app, and there are plenty of other options for more complex editing…
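
As an aside, screenrecord also accepts a time limit in seconds (capped at the same three minute maximum), which is handy if you want the recording to stop by itself:

adb shell screenrecord --time-limit 30 /sdcard/filename.mp4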

Categories: iOS, Xamarin

Xamarin iOS App Settings

When we talk about app settings we could mean a few different things: the actual settings values which are stored by your app, an in-app method to view/edit those settings, or the iOS Settings app which exposes both system and application settings to the user in a centralised place.

In this article we’ll look at the last of these. If you are creating a cross platform app you’ll probably never look into this as the main place to store your settings because you’ll want a consistent UI within your app to expose settings. The iOS Settings app will always have an entry for your app anyway as this exposes permissions which your app has requested and allows the user to change their preferences. You may have also noticed that when debugging your Xamarin app you’ll see entries added here for the Xamarin debugger.

In my previous blog post I wrote about visualising screen taps in your app in order to capture video walkthroughs. I decided I didn't want the option for this exposed via the app's own settings page, and I wanted to be able to set it before even launching the app, so I looked into adding it to this menu. As it turns out, it's actually quite easy. I'm going to assume you use the NSUserDefaults mechanism for storing the underlying setting value. You might use this directly or via a cross-platform library such as Pontoon, which abstracts this away behind a common API.

First you need to create a folder within your Xamarin iOS project called "Settings.bundle". Within this, you add an XML file called "Root.plist" and set its Build Action to "BundleResource". This XML file is in Apple's horrible "Property List" format and contains a dictionary with a single entry with the key "PreferenceSpecifiers". The value of this is an array of dictionaries, each of which represents an individual preference item. Here is my example adding a single Boolean toggle:-

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
   <dict>
      <key>PreferenceSpecifiers</key>
      <array>
         <dict>
            <key>Type</key>
            <string>PSToggleSwitchSpecifier</string>
            <key>Title</key>
            <string>Show touches</string>
            <key>Key</key>
            <string>ShowTouches</string>
            <key>DefaultValue</key>
            <false/>
         </dict>
      </array>
   </dict>
</plist>

This gives you the simple output shown below. The section header with your app name is added for you. Here you can see each entry defines the setting type (which affects the other options available), the display text, the key (which is the name of the setting in code) and the default value (of the correct type for your setting).

[Screenshot: the "Show touches" toggle in the iOS Settings app]

There are other type specifiers for other settings types such as text or selections from multiple fixed values. You can read more details here.
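
Reading the value from code is then an ordinary NSUserDefaults lookup. One caveat worth knowing: iOS doesn't copy DefaultValue into NSUserDefaults until the settings pane has actually been displayed, so treat the type's default as the fallback:

// "ShowTouches" matches the Key declared in Root.plist above.
// Returns false if no value has been stored yet.
bool showTouches = NSUserDefaults.StandardUserDefaults.BoolForKey("ShowTouches");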

While this might seem a bit disconnected from your app, you can open this settings page programmatically – useful not just for showing the user these custom settings, but also if you want to assist the user in turning an app permission back on. There is a fixed Uri which launches your app-specific settings page, which you can call using:-

UIApplication.SharedApplication.OpenUrl(new NSUrl(UIApplication.OpenSettingsUrlString));

Some third-party apps make extensive use of this to display app settings and properties. Take a look at Microsoft Word, which uses this mechanism to display product version and licensing information as well as app settings. How much you integrate with it is entirely up to you, but offering these kinds of hooks into the OS can help your app feel more at home on the platform.

Categories: Xamarin

Touchscreen Visualisation on Xamarin iOS

Seems to Have an Invisible Touch

To produce a demonstration video on Android you can turn on touch visualisations from the Developer options menu, and screen interactions will then show up in a screen capture. No such option exists on iOS, so you need to write some custom code in your app instead. A couple of solutions exist for native iOS apps, but using these from Xamarin would require additional wrapper code. The alternative is to write the code in C# to achieve the same result.

Wrap or Write

My requirements were quite simple: for a first version I only needed to show single touch events, and I wanted something which looked visually similar to the Android equivalent. It turns out that the solution is to inherit from UIWindow and add some extra code to capture touch events and draw to the screen. This means a single solution works anywhere in your app, as a single Window is used regardless of what views you have. It should work in Xamarin Forms on iOS too, by adding similar code to the FinishedLaunching method in your AppDelegate.
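
To make this concrete, here is a minimal sketch of the idea – a UIWindow subclass which overrides SendEvent to see every touch. The real TouchWindow in the repo linked below also takes care of drawing and animating the marker:

using System.Linq;
using CoreGraphics;
using UIKit;

public class TouchWindow : UIWindow
{
    public TouchWindow(CGRect frame) : base(frame) { }

    public override void SendEvent(UIEvent evt)
    {
        // Let the normal responder chain handle the event first
        base.SendEvent(evt);

        if (evt.Type != UIEventType.Touches)
            return;

        foreach (UITouch touch in evt.AllTouches.Cast<UITouch>())
        {
            CGPoint location = touch.LocationInView(this);

            // draw/move a circular UIBezierPath marker at 'location',
            // removing it when touch.Phase is Ended or Cancelled
        }
    }
}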

When you create a blank Xamarin iOS app it will generate a Main.storyboard for the UI and hook it up in the Info.plist manifest, so there will be no code in FinishedLaunching to set up the Window and root ViewController. The first step therefore is to un-set this by clearing the main storyboard entry in Info.plist:-

[Screenshot: clearing the main storyboard entry in Info.plist]

Then you can add code in the FinishedLaunching method to create the Window and load the storyboard.

public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
{
	// Set a custom Window which handles touch events
	Window = new InTheHand.TouchWindow(UIScreen.MainScreen.Bounds);

	// Load root view controller from Storyboard into Window.
	// You can't set this from Info.plist as it'll use a regular UIWindow.
	Window.RootViewController = UIStoryboard.FromName("Main", null).InstantiateInitialViewController();
	Window.MakeKeyAndVisible();

	return true;
}

Some solutions to this use graphics to display the touch circle, but I've gone for straightforward UIKit drawing code with a circular UIBezierPath. The full code for a working demo app is on GitHub here:-

https://github.com/inthehand/VisibleTouch

You can see the effect of this code in this video:-

There are some limitations with this code. It supports single-touch gestures only; if you use multiple fingers you'll get a circle jumping around to the latest event. The circle can also disappear quite abruptly – it would probably look better with a simple animation to fade out. I haven't included code here to turn the visualisations on or off; chances are you won't want them on all the time, so it would make sense to add a setting somewhere to toggle this. I did this myself using a settings bundle, so you can toggle the setting before even running the app. That was interesting in itself, and easier to integrate with Xamarin iOS than I expected, but that is a topic for another blog post.

If there is significant interest in this I can release it as a NuGet package for easy integration into your Xamarin iOS project. For now you can take the TouchWindow.cs code, drop it into a project and modify the AppDelegate as described above.