Jul 15

Restroom Monitor Mark II

Have you ever found yourself in need of answering nature’s secondary call, walking across the office to heed it, only to find that all the stalls are in use? <sarcasm>Being situated at the far end of the office, this was a very serious issue</sarcasm> – or at least I could pretend it was, to give myself enough of an excuse to do something about it. If only there were a way to know, before ever leaving your desk, whether your call could be answered in peace.

The concept is simple – determine the state of a restroom and put it somewhere that it can be checked before heading over. A similar system was installed many years ago using wireless, battery-powered magnetic switches updating a website – but the very obvious boxes were vandalized and required frequent maintenance. My objective was to make something completely invisible (and that wouldn’t make people uncomfortable – as restroom tracking could easily do) and impervious to all but the most malicious sabotage – while providing a seamless way to check the state.

The restrooms in question are private rooms (not just stalls), with their own door and deadbolt – the door is always closed, and the occupancy is determined by the position of the deadbolt, which also has a red/green flag on the front of the door. One of the main concerns about this project was doing it in such a way that it wouldn’t make anyone uncomfortable – which would rapidly kill the project – invasion of privacy lawsuits can be expensive. A wide variety of methods of determining state were considered and discarded:

  • Motion sensor – too much like a camera and bad for long visits, tough to get a definitive state
  • Infrared sensor – too much like a camera and visible
  • Red/green color sensor looking at flag – too much like a camera
  • Magnetic switch or door hinge rotation sensor – can’t tell if the door is locked or not
  • Deadbolt induction sensor – too fragile
  • Switch connected to a Bluetooth dongle to communicate – sitting inside a metal door frame could have connectivity issues and the battery would have to be replaced

I eventually settled on a deadbolt switch designed specifically for commercial installation, wired through the door frame to above the dropped ceiling (isn’t drilling holes in the office walls fun?). The switch sits inside the deadbolt pocket (so is not visible), is designed for industrial usage (so won’t break with repeated use), and is wired so that connectivity is perfect and there are no batteries (so requires no maintenance). The switch I used was this deadbolt pocket switch.


Once you have a way to determine the state of the restroom, the next step is to read that state and send it somewhere. After working with a couple of different microcontrollers, I decided to use the Spark Core because of its on-board WiFi and extremely easy development/deployment process. Having worked with it, I can’t recommend it highly enough. After using the phone application to connect it to WiFi and tie it to your account, you update the microcontroller by writing the application in their web IDE, then pushing the automatically verified and compiled code to the device over the public internet. It’s one step short of pure magic – and a drastic and welcome change from the microcontrollers I’ve worked with in the past. All that aside, it’s a simple task to have the Core watch the switch and POST a notification over WiFi to a listening service endpoint whenever the state changes. The microcontroller is wired into power from a standard mains-to-USB power supply – again, removing any dependency on battery maintenance.
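The firmware logic boils down to edge detection: only a change in switch state triggers a POST. A sketch of that logic (in JavaScript for illustration – the actual Spark Core firmware is Wiring/C++, and the `post` callback here stands in for the HTTP call):

```javascript
// Only notify the service when the switch state actually changes;
// `post` stands in for the HTTP POST to the listening endpoint.
function makeNotifier(post) {
  var last = null;
  return function onSample(locked) {
    if (locked !== last) {
      post(locked);   // state changed: tell the service
      last = locked;  // remember it so repeated samples are ignored
    }
  };
}

// Repeated identical samples produce no traffic; only edges do.
var sent = [];
var notify = makeNotifier(function (s) { sent.push(s); });
[false, false, true, true, false].forEach(notify);
console.log(sent);  // [ false, true, false ]
```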

I did experiment with using USB battery packs to see what kind of battery life I could get – and ran into an interesting behavior.  The battery packs that automatically turn on do so by monitoring the current dropped across the power pins.  They also automatically turn off when too low a draw is detected – assuming that nothing is actually using the power.  To conserve power draw, I disabled the WiFi on the microcontroller when not actively transmitting a changed switch state.  While this did save energy, it also made the power draw low enough (<10 mA) that the battery pack automatically turned itself off thinking that nothing was plugged in.  To get around this, I wired up a transistor circuit to put a 50 millisecond draw across the power pins (through a resistor) every 7 seconds (suggested by this article).  This was effective in keeping the device on – but the biggest battery pack I could find (20,000 mAh) only lasted about a week.  In the interest of a platform requiring zero maintenance, I instead decided to hook up a USB wall wart power supply and wire that in rather than relying on battery power.
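For what it’s worth, the numbers in that experiment imply a surprisingly high average draw. A quick back-of-the-envelope check using the figures above (nominal capacity; real packs deliver less, which only strengthens the case for mains power):

```javascript
// Implied average current from the 20,000 mAh pack that "lasted about a week".
var capacityMah = 20000;                        // biggest pack I could find
var observedHours = 7 * 24;                     // roughly one week
var impliedAvgMa = capacityMah / observedHours; // ~119 mA average

// Duty cycle of the keep-alive pulse: 50 ms out of every 7 s.
var keepAliveDuty = 50 / 7000;                  // ~0.71% of the time

console.log(Math.round(impliedAvgMa) + ' mA, ' + (keepAliveDuty * 100).toFixed(2) + '%');
```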


The two grey wires go through the wall, down the frame, and to the two door switches (via Molex connectors for easier maintenance); the lower black cable is power from a USB wall wart, and the upper black cable goes to the indicator lights – more on that later.

On the other side of the POST request, I have a Windows Service running on a server, self-hosting both an HTTP endpoint for the microcontroller to call with switch state changes and a Skype for Business platform with user endpoints representing each restroom. Since everyone at Clarity is on our internal IM client all day (Lync/Skype for Business), it’s logical that we’d look there for the state of the restrooms. Since Skype for Business endpoints already have a presence state associated with them that shows red/green, it’s an absolutely perfect fit to have endpoints for each restroom that can be Available or Busy, according to the status of the switch on the physical room. Welcome to the internet of things (or places)!
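The mapping itself is as simple as it sounds. Conceptually (a JavaScript sketch – the real service uses the Skype for Business/UCMA APIs, so these names are illustrative only):

```javascript
// Deadbolt thrown -> room occupied -> the endpoint's presence shows Busy (red);
// deadbolt open -> Available (green).
function presenceFor(deadboltEngaged) {
  return deadboltEngaged ? 'Busy' : 'Available';
}

console.log(presenceFor(true), presenceFor(false));  // Busy Available
```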


So that’s all nice and dandy!  Indicators on our computers getting the state of the restroom – that seems good enough.  Yea, I wasn’t happy with “good enough” either – it just wasn’t quite over-the-top enough yet.  Clearly, more was needed.

I printed 3D models of toilets (can I just say how much I love the previous 6 words?) on the office 3D printer (MakerBot 2) using clear plastic filament, and embedded LED lights into the back of them (hot glue to the rescue). Since the microcontroller already knows the state of the switches and is able to put out a very convenient 5 volt current that can drive the LEDs, it was a simple task to wire up the translucent toilets to lit LEDs indicating their respective state – functioning as remote physical indicators for the rooms that could be glanced at before heading down the hallway to the doors themselves.  As a side note, I used an Ethernet cable to go from the microcontroller to the RGB LEDs – 3 power sinks per light plus one shared voltage source needed 7 wires to run to the models, and Ethernet cables are a very convenient 8 strands, and are easily available in an office.  I wired up female connectors from Ethernet ‘extension’ cable to both ends, so that the light is easy to disconnect and can use standard Ethernet cables of whatever length is needed to run the distance without having to re-solder the pins.  In the picture above, it’s the upper black wire that I said I would mention later.


Next, since the Skype for Business endpoints that were showing the presence for the restrooms already support IM very easily (and were coming in to an application that I controlled), why not allow the restroom endpoints to have conversations? I added the ability for the endpoints to respond to inquiries about usage for the day with some basic statistics (from the aforementioned state change data), suggest places for lunch (randomized, suggesting 3 different cuisines from a database of over 50 places in the immediate vicinity), and tell jokes (all of them awful – from a database collection of several thousand).
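The conversational side is just keyword routing on top of those data sources. A sketch of the dispatch (handler names hypothetical; the real endpoints sit on the Skype for Business API):

```javascript
// Route an incoming IM to one of the endpoint's three tricks.
function routeMessage(text) {
  var t = text.toLowerCase();
  if (t.indexOf('stat') !== -1) return 'stats';   // usage statistics for the day
  if (t.indexOf('lunch') !== -1) return 'lunch';  // 3 cuisines from the database
  if (t.indexOf('joke') !== -1) return 'joke';    // one of several thousand, all awful
  return 'help';                                  // anything else gets the menu
}

console.log(routeMessage('tell me a joke'));  // joke
```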

Some people seem to have a thing about using a restroom when the seat still carries warmth from the last occupant. To accommodate those folks, I added a ‘cool-down’ period based on how long the room was occupied. Y’know, because it was absolutely necessary.
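A rule of that shape, with constants invented purely for illustration (the actual scale factor and cap aren't specified here), might look like:

```javascript
// Longer occupancy -> longer cool-down, capped. The 0.5 scale factor and
// 5-minute cap are made up for illustration only.
function cooldownSeconds(occupiedSeconds) {
  var cap = 300;  // never more than five minutes
  return Math.min(occupiedSeconds * 0.5, cap);
}

console.log(cooldownSeconds(120), cooldownSeconds(1200));  // 60 300
```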


Finally, since data was getting sent to the service anyway with each switch change, I set up a database that the historical changes could be written to. This way, we can compile all sorts of utterly useless statistics about restroom usage, preference between the two, peak times of day, etc. What better information is there to offer at quarterly meetings?
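With lock/unlock events in a table, those statistics fall out of pairing each lock with the following unlock. A sketch (the actual schema isn't shown here, so the shape of the rows is assumed):

```javascript
// events: chronologically ordered rows of { time: ms, locked: bool }.
function usageStats(events) {
  var uses = 0, totalMs = 0, lockedAt = null;
  events.forEach(function (e) {
    if (e.locked) {
      lockedAt = e.time;              // visit starts
    } else if (lockedAt !== null) {
      uses += 1;                      // visit ends: count it
      totalMs += e.time - lockedAt;   // and add its duration
      lockedAt = null;
    }
  });
  return { uses: uses, totalMinutes: totalMs / 60000 };
}

var stats = usageStats([
  { time: 0, locked: true },       { time: 300000, locked: false },
  { time: 600000, locked: true },  { time: 1200000, locked: false }
]);
console.log(stats);  // { uses: 2, totalMinutes: 15 }
```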


And with that, the Pooper Snooper Mark II (er… Restroom Monitor) was born. 



Jul 15

Uploading Files Asynchronously in Internet Explorer 8-10 with Server Responses

I recently had to debug an issue on a client project related to Internet Explorer versions 8-10 not correctly handling errors returned from a server during an asynchronous file upload.  I solved it, but the answer doesn’t appear to be anywhere on the Internet; hence this post.

Anyway, as background: until the advent of XMLHttpRequest Level 2, if you wanted to upload a file asynchronously (i.e. without a full-page POST-back) from a browser, your best bet was to embed a hidden IFrame on your page and POST back through it.  This actually works pretty well and is well-understood technology.  See here for an example.

What isn’t well understood is how to communicate back to the browser what happened on the back end.  For example, the file uploaded could violate a size limit or other items POSTed with the form might not be kosher.  You want to make sure the user knows this, but you’re using an IFrame.  So you need to read the content of that response in the IFrame using the load event on the IFrame.  You could simply return an error via a non-success response like a 400 or 500 error and read that from the IFrame in JavaScript for processing.  On modern browsers, that works fine.  However, on older browsers like Internet Explorer versions 8-10, a non-success response essentially locks the IFrame from the parent frame.  You’re basically bombarded with “Access is Denied” messages when trying to access any page content from outside of the IFrame via JavaScript in this case.

How to proceed?  With a Hack (unfortunately).

In this instance, your best bet is to return a JSON response for both success and failure, but transmitted under the “text/html” content type.  You need to use text/html so Internet Explorer doesn’t prompt the user to download the response as a file, which is decidedly not what you want.  Below is an example of this.

public JsonResult AddOrderDocument(DocumentFormViewModel viewModel)
{
    if (!ModelState.IsValid)
    {
        return Json(new ModelStateException(ModelState), "text/html");
    }

    try
    {
        var orderDocument = _orderService.AddOrderDocument(viewModel.OrderId, viewModel.File.FileName, viewModel.File.InputStream, viewModel.Title);

        // setting the return type to "text/html" is a hack that is needed to prevent some versions of IE from prompting the user to download the json
        // since it is returned into an iframe. Some versions of IE11 were behaving this way, though others were okay. Leaving in for safety's sake.
        // related post:
        return Json(new OrderDocumentViewModel
        {
            OrderDocumentId = orderDocument.Id,
            DocumentId = orderDocument.DocumentId,
            FileName = orderDocument.Document.Name,
            Title = orderDocument.Title
        }, "text/html");
    }
    catch (ValidationException validationEx)
    {
        return Json(new ModelStateException(ModelState), "text/html");
    }
    catch (Exception ex)
    {
        return Json(ex, "text/html");
    }
}

With the 200 Success response (even for server-side errors), the content of the JSON is pushed into the IFrame and can then be read from the IFrame like normal.  Then it’s up to your JavaScript to read the content of the IFrame and act accordingly, like the example below:

function submitOrderDocumentForm() {
    var callback = function () {
        var response = $('#documentIFrame').contents().text();
        var jsonResponse = tryParseJSON(response);

        if (jsonResponse) {
            if (!jsonResponse.Message) {
                // no Message property: success - do something here
            } else {
                // a Message property signifies an exception from C# - show the error here
            }
        } else {
            // the response wasn't JSON at all - handle the unexpected here
        }

        $('#documentIFrame').unbind('load', callback);
    };

    $documentForm.attr('target', 'documentIFrame');
    $('#documentIFrame').bind('load', callback);
    $documentForm.submit();  // submitting the form POSTs through the hidden IFrame
}
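The `tryParseJSON` helper used above isn’t part of jQuery; a minimal version, returning `false` for anything that isn’t a JSON object, looks like this:

```javascript
// JSON.parse throws on invalid input and happily parses bare strings and
// numbers, so guard both cases and only return actual objects.
function tryParseJSON(jsonString) {
  try {
    var o = JSON.parse(jsonString);
    if (o && typeof o === 'object') {
      return o;
    }
  } catch (e) {
    // not valid JSON; fall through
  }
  return false;
}

console.log(tryParseJSON('{"Message":"oops"}'));  // { Message: 'oops' }
console.log(tryParseJSON('<html>error</html>'));  // false
```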



Jul 15

Renaming an email attachment on K2 workflow

In my previous blog, I explained how to send email attachments from K2 workflow.

Sending SSRS reports as attachments in K2 workflow

However, you may have noticed that the attachments come with a weird name, for example:

<file><name>MyRequest062a34ce-a8d4-4742-8ba2-fde577a7c297.pdf</name><content>{64basedstring content here}</content></file>

I think this is because of the way SSRS (or K2) tries to make the name unique. This is fine and dandy, but it doesn’t look great for the person receiving the email. How do we fix this?
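For reference, the junk in the name is just a GUID appended before the extension, so if you only needed to clean it up after the fact, one regex would do it (a JavaScript sketch, not K2 code – the options that follow fix it at the source instead):

```javascript
// Remove the 8-4-4-4-12 hex GUID that SSRS/K2 appends to the base name.
function cleanAttachmentName(name) {
  return name.replace(
    /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/i, '');
}

console.log(cleanAttachmentName(
  'MyRequest062a34ce-a8d4-4742-8ba2-fde577a7c297.pdf'));  // MyRequest.pdf
```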

Luckily, I figured out a way, or actually two ways, to do this:

Option 1: Editing code

On the Mail Event, right click and Select View Code -> EventItem.


This will open the xoml file as shown below:


Right click on “Add Attachment” and select View Code. Go to the ProcessAttachments method and edit the file name as shown below:

This option lets you define the name based on a Process Instance variable:


Option 2: Using Inline Function “Create File from Content”

In this option, when you are on the Attachments wizard screen in the Mail Event wizard, instead of adding reference to Smartobject directly, add an Inline function “Create File From Content”.


This will take you to the File Content wizard:


Specify the file name of your choice for the first parameter.

For the Content parameter, navigate to the report and when you have to select the Return Type, choose ReportFile.Content (this is very important else the user will get a corrupt attachment).

I have had mixed results with Option 2. You may have to play around with the Data Conversions inline functions if it does not work on the first attempt, but the K2 documentation says the content can be a base64-encoded string, among other formats.



Jul 15

Accessing SSRS reports via Smart Objects

To communicate with SSRS reports from a K2 workflow, you need to have a SmartObject defined. This is similar to pretty much everything else you access via workflow (for example, communicating with a database). In this blog, I will show you the steps to connect to SSRS (I am assuming you already have SSRS installed on your servers and that K2 is installed with the right plug-ins to connect to SSRS).

First create a new Smart Object and click on “Add” in the SmartObject section:

In the “Add Service Object Method” wizard screen, click on Browse to open the Context Browser.

Go to Service Object Server(s) -> Service Object Server. Navigate to your Reporting Service node and pick the report type as shown below:


After binding the appropriate input and output parameter, note that the return type is of type “File” (more about this in the later blog)


Click on Finish, deploy the SmartObject, and bingo! You are now ready to access your PDF SmartObject via K2 workflow.



Jul 15

Sending SSRS reports as attachments in K2 workflow

I recently ran into a requirement where I had to connect to SSRS, retrieve reports, and send them as attachments via a K2 Mail event. I looked everywhere online but could not find any helpful resources. Hopefully this series of blogs will be useful to someone who runs into something similar:

The first blog discusses how to set up a SmartObject to access SSRS reports:

Accessing SSRS reports via Smart Objects

This blog discusses how to read the SmartObject and send it as an attachment:

Create an Activity and add a Mail Event to the activity. On the Attachments Wizard screen, click on Add, then click on the ellipsis icon to browse to the SmartObject that is configured to read the SSRS report (clicking the Browse button will access local folders instead).

This will open the Context Browser window. Go to the Environment tab and navigate to SmartObject server and select the report as highlighted below (click on the report name and not on any child nodes).


Specify any input parameter for the report. On the next screen, you will be asked to select the return type as shown below:


ReportFile.Content is a base64 string representation of the document content.

ReportFile.FileName is the name of the file.

ReportFile.XML is an XML-wrapped object of the above, for example:

<file><name>MyRequest062a34ce-a8d4-4742-8ba2-fde577a7c297.pdf</name><content>{64basedstring content here}</content></file>

Select the XML format option as the return type.

And you are good to go! You have now configured K2 mail event to send attachments obtained from SSRS reports.

Read my other blog if you want to find a way to change the attachment name.

Renaming File Attachment



Jun 15

Convert Text to UPPER CASE and save to database in List View Smartforms

My goal was to take a user’s input into an editable list view and convert it to upper case. Sounds simple enough, doesn’t it? I tried this approach, but it didn’t work.

Not sure if it was because I was using a later version of K2 Blackpearl (4.6.9) or maybe I was missing something, but the article above did guide me in the right direction.

Here are the steps I took in order to accomplish this:

  • Suppose you have a grid with First Name column and you would like to save the value in UPPER CASE.


In the above example, the corresponding control for the First Name column in the Add/Edit Item row is called “First Name Text Box”.

  • Click on the column grid that you want to be displayed as UPPER CASE.
  • Go to Properties and go to Expression property.
  • Create a new expression as shown below:


Note that the input parameter for ToUpper() is the Add/Edit Row control (and not the control under the Display Row(s) section).

  • Once set, when the user types a value into First Name and the control loses focus, the value is converted to UPPER CASE.

Here is how it looks when the control is in focus:


And when the control loses focus:


Hope this helps!



Apr 15

Raspberry Pi 2 w/ camera module time lapse video of pepper and tomato sprouts


As I mentioned in my last post, I’m working on a project with the Raspberry Pi 2 and one of the things I’m doing is playing around with the camera module.

This little camera is not bad (similar to a cell phone camera), but it definitely does best at a bit of a distance – probably 6-10 feet at least. I needed to be closer than that to get enough detail, and also due to the limited spacing between the grow lights and the seed trays. I ended up picking up one of those cheap little sets of lenses you can get for cell phones. It’s not going to win fine photography awards, but it’s just fine for my needs. The kit includes a fisheye, wide angle, macro and telephoto lens. Here’s a closeup of one of the lenses in place.

For my initial run of photos I’m using the wide angle, but I’m hoping to experiment with the macro lens on a single plant as well. This macro lens has to be REALLY close to get clear shots, so I’ll have to experiment a bit.

My camera mount is a very primitive holder I threw together out of scrap wood, but it does the job. I’m still tweaking it as I go. It’s about as DIY as it gets.

Here is a wider shot of the seed starting area with the camera mount in place. I have it taking photos every 30 minutes of one of my pepper and tomato seed starting trays.

Here is an initial time lapse video showing some of the seeds sprouting and growing. This was taken over the course of 5 days, March 26-30. I do change the camera position and seed tray position slightly, so it’s a bit jerky in spots. I’m also still figuring out the best settings to use when turning the photos into a video and the best shot frequency to use, so may have some better examples later.
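As a sanity check on clip length: photos every 30 minutes over 5 days don't add up to much video (the playback frame rate below is an assumption; the post doesn't specify it):

```javascript
var intervalMin = 30;                         // one photo every 30 minutes
var days = 5;                                 // March 26-30
var frames = (days * 24 * 60) / intervalMin;  // 240 photos total
var fpsAssumed = 24;                          // assumed playback rate
console.log(frames, frames / fpsAssumed);     // 240 frames -> 10 seconds of video
```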



Mar 15

Initial thoughts on Raspberry Pi 2

When the Raspberry Pi first came out a few years back, it seemed like a very interesting idea in theory. A tiny computer for $35, completely self-contained, with built-in Ethernet, HDMI and a couple of USB ports. It piqued my interest briefly, but I never got around to trying it out.

Fast forward to 2015 and there’s a new model with a quad-core processor and more memory, which translates into better/faster video options and a lot more power in general. There are plenty of articles discussing all the ins and outs of the new model, but a couple of things made me take a look this time.

One, Microsoft has promised a version of Windows 10 (out in preview right now) that will run on the unit. This opens up all kinds of possibilities for someone who is already intimately familiar with the Windows development eco-system. I do love working with Linux, but the first part of this sentence is a lie. Guess I just lost any geek cred I was building up. I’ve dabbled in Linux on and off over the years and I think the biggest issue is that I’ve never spent enough time in it to get comfortable. So everything I want to do involves a trip to Google.

Two, my company Clarity is sponsoring a concept called Ship Days this year where each employee is expected to “ship” some little side project during the year. It’s pretty wide open, but could be a mobile app, an Internet of Things project or something you might see at a MakerFaire event. Suffice it to say I won’t be the only one taking a fresh look at the Raspberry Pi platform.

I’ve had the Raspberry Pi 2 for a couple weeks now and here are some random thoughts and impressions.

  • Since its conception, the Raspberry Pi fairly quickly became a hacker/tinkerer’s dream platform. That means there are all kinds of add-ons available, the setup process has gotten drop-dead simple, and there are tons of tutorials, blog posts and ideas out there to peruse.
  • The Raspberry Pi 2 model mostly changed in how much power is on the board, so pretty much anything that worked with previous models will work with this one. In some cases you might need an adapter cable to hook up the proto boards or shields, but most stuff is fine.
  • The “NOOBS” setup experience gives you lots of options, including ones geared to specific uses, such as a media center PC. I was up and running in no time on the most common distro (Raspbian), which is a version of Debian Linux.
  • The unit doesn’t really like hot-swapping USB very much. I managed to corrupt my first install pretty easily and had to start again. If I understand correctly, part of this is due to using the SD card as your main boot disk, which is much more sensitive to I/O disruption than a traditional hard disk.
  • There are tools that make it easy to pop your SD card into your main computer and clone it once everything is working the way you want, so that is certainly a good idea when working with this unit.
  • The networking stack seems a bit flaky with wireless. I got the highly recommended Edimax nano USB adapter, but I’m still having trouble getting the unit to respond consistently to SSH or RDP requests. I put in a job to restart networking every hour or so, and that seems to have helped.
  • I got the Raspberry Pi camera module and it is extremely easy to work with. Right now I have it taking time-lapse photos of one of my seed starting trays. This tutorial worked great and it’s really simple to get working. More details on this in later posts.

All in all it’s an impressive little piece of engineering, particularly for $35. There are lots of possibilities for automation and monitoring that might be interesting to try on my little hobby farm. Many folks are already using a Pi or Arduino along with sensors to automate plant watering for instance. I bought a couple of moisture sensors that I’m hoping to get hooked up eventually, but as that requires some soldering it involves a bit more time to get up and running. I’m hoping to tackle that next.



Mar 15

How Much for that Microsoft ROM of Windows?

On March 17, Microsoft announced that it would be piloting a program with Xiaomi, a Chinese phone manufacturer, to provide select power users with Windows 10 ROMs for their Xiaomi Android phones. This program is billed as a way to gather information, but there’s a much larger opportunity here for Microsoft. A Windows 10 ROM that can be loaded onto an Android phone (any Android phone) would be a huge boon for Microsoft.

No more would people be limited by the Windows Phone hardware that their carrier decides is worthy of offering on their network. You could simply buy an Android phone (like the Samsung Galaxy S6), connect it to your computer, download a program, and install Windows on your phone. Suddenly every Android phone could be a potential Windows phone and switching operating systems would not require buying a new phone or switching carriers. If you like your phone but not its OS, you can change the OS but keep the phone.

It would be like Cyanogen, but not wedded to the fragmentation of the Android kernel that prevents newer versions of Android from running on old hardware. Instead of just pushing Microsoft services into Android, Microsoft could replace Android altogether. To sweeten the deal, Microsoft could make the Windows installer smart enough to back up a user’s files from Android and restore them into Windows. Suddenly the proposition of switching OSes would become a whole lot more interesting and available.

Unfortunately, there’s no indication that Microsoft is going in this direction, but the announcement of a ROM to flash Windows to an Android phone is good news and provides this promising opportunity.

Source: Ars Technica – Xiaomi and Microsoft to offer Windows 10 conversion for Android phones



Mar 15

Mobile Applications and Customer Service

This past weekend, a friend of mine got an alert on her phone that she had already used 75% of her data allotment for the month. She was only 6 days into the month and she has 3GB of data per month and typically uses a little more than 2GB per month. So she was shocked at this revelation since she had been home on her WiFi for the previous two days. After checking her data usage statistics for the offending program, she discovered that a game she hadn’t played in months was responsible for 90% of the total usage. She promptly emailed the publisher to inform them of the issue and request an explanation. She received a prompt reply from the publisher’s support team informing her that this was a known, high-priority issue but that a fix was still being developed. The suggested recourse was to “force close the app when you aren’t utilizing a WiFi connection to play the game”.

First of all, I want to praise the support staff’s prompt response (less than 3 hours on a Sunday afternoon) because that shows they care. This is a very important aspect of customer service, especially for mobile games that are literally a dime a dozen. By taking the time to respond promptly and attempting to understand a user’s issue, a company can go a long way toward retaining a customer.

However, the actual response fails to appreciate the reality of my friend’s situation.

My friend was heartened to see such a quick response. However, her mood quickly worsened upon reading the content of the response. As it turns out, this extensive and unintended mobile data usage is a known bug for the application. Since mobile data does not grow on trees, a bug that consumes several hundred MB of data per day could end up costing users a significant amount of money. The app was on track to consume nearly 10GB of data by itself over the course of a month. That’s a huge bug and would have cost my friend $60 in data overage fees alone and resulted in her turning off her mobile data for the rest of the month to avoid those charges. If this app is installed in any volume, that’s a lot of money being burned.

Meanwhile, a fix for the issue is “in the works” but without a time frame on when it will be in place, and the “strongly recommended” interim solution is to force close the app, “if you’re looking to curtail mobile data usage”. I don’t know how you can get more obtuse as an organization! That’s equivalent to being on a sinking ship and, instead of saying “Get on a lifeboat”, saying “Please hang on while we try to fix the pumps”! It smacks of a lack of understanding for the customer’s situation and of a defensive strategy toward the company’s bottom line.

A Better Response

Just because you don’t have a fix yet doesn’t mean you can’t stop the bleeding. If a soldier is wounded on the battlefield and needs surgery, the field medic’s job is to stop the bleeding, not to perform the surgery. They need to keep the soldier alive so the soldier can make it to the surgeon. The same is true for mobile games. If the game starts causing adverse side effects, those effects need to be dealt with swiftly so the user doesn’t uninstall the game.

Upon being made aware of the issue, the company should have immediately patched the game to disable the feature consuming excessive data or, if that feature cannot be isolated, disable the game from using mobile data at all for the time being. This, coupled with a popup when the application is opened informing the customer of the issue along with the option to ignore the bug, would be the best way forward in the short term. It would stop the bleeding, reduce angry uninstallations, and allow the fix to be developed and adequately tested.
