Apr 15

Raspberry Pi 2 w/ camera module time lapse video of pepper and tomato sprouts

As I mentioned in my last post, I’m working on a project with the Raspberry Pi 2 and one of the things I’m doing is playing around with the camera module.

This little camera is not bad (similar to a cell phone camera), but it definitely does best at a bit of a distance. Probably 6-10 feet at least. I need to be a bit closer to get enough detail and also due to the limited spacing between the grow lights and the seed trays. I ended up picking up one of those cheap little sets of lenses you can get for cell phones. It’s not going to win fine photography awards, but it’s just fine for my needs. The kit includes a fisheye, wide angle, macro and telephoto lens. Here’s a closeup of one of the lenses in place.

For my initial run of photos I’m using the wide angle, but I’m hoping to experiment with the macro lens on a single plant as well. This macro lens has to be REALLY close to get clear shots, so I’ll have to experiment a bit.

My camera mount is a very primitive holder I threw together out of scrap wood, but it does the job. I’m still tweaking it as I go. It’s about as DIY as it gets.

Here is a wider shot of the seed starting area with the camera mount in place. I have it taking photos every 30 minutes of one of my pepper and tomato seed starting trays.

Here is an initial time lapse video showing some of the seeds sprouting and growing. This was taken over the course of 5 days, March 26-30. I do change the camera position and seed tray position slightly, so it’s a bit jerky in spots. I’m also still figuring out the best settings to use when turning the photos into a video and the best shot frequency to use, so I may have some better examples later.
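For planning shot frequency, the arithmetic is simple. Here's a small sketch (Python, purely for illustration; the playback frame rates shown are assumptions, not the settings I used) relating capture interval, shoot duration, and final video length:

```python
def photo_count(days, interval_minutes):
    # Number of frames captured over `days` at one shot per interval.
    return days * 24 * 60 // interval_minutes

def playback_seconds(frames, fps):
    # Length of the resulting video when frames play back at `fps`.
    return frames / fps

frames = photo_count(5, 30)          # 5 days at one photo every 30 minutes
print(frames)                        # 240 frames
print(playback_seconds(frames, 24))  # 10.0 seconds at 24 fps
print(playback_seconds(frames, 10))  # 24.0 seconds at 10 fps
```

Halving the interval doubles the frame count, which is why shot frequency and playback fps have to be tuned together.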

0 comments , permalink


Mar 15

Initial thoughts on Raspberry Pi 2

When the Raspberry Pi first came out a few years back, it seemed like a very interesting idea in theory. A tiny computer for $35, completely self-contained, with built-in Ethernet, HDMI and a couple of USB ports. It piqued my interest briefly, but I never got around to trying it out.

Fast forward to 2015 and there’s a new model with a quad-core processor and more memory, which translates into better/faster video options and a lot more power in general. There are plenty of articles discussing all the ins and outs of the new model, but a couple of things made me take a look this time.

One, Microsoft has promised a version of Windows 10 (out in preview right now) that will run on the unit. This opens up all kinds of possibilities for someone who is already intimately familiar with the Windows development ecosystem. I do love working with Linux, but the first part of this sentence is a lie. Guess I just lost any geek cred I was building up. I’ve dabbled in Linux on and off over the years and I think the biggest issue is that I’ve never spent enough time in it to get comfortable. So everything I want to do involves a trip to Google.

Two, my company Clarity is sponsoring a concept called Ship Days this year where each employee is expected to “ship” some little side project during the year. It’s pretty wide open, but could be a mobile app, an Internet of Things project or something you might see at a MakerFaire event. Suffice it to say I won’t be the only one taking a fresh look at the Raspberry Pi platform.

I’ve had the Raspberry Pi 2 for a couple weeks now and here are some random thoughts and impressions.

  • Since its introduction, the Raspberry Pi fairly quickly became a hacker/tinkerer’s dream platform. That means there are all kinds of add-ons available, the setup process has gotten drop-dead simple and there are tons of tutorials, blog posts and ideas out there to peruse.
  • The Raspberry Pi 2 model mostly changed in how much power is on the board, so pretty much anything that worked with previous models will work with this one. In some cases you might need an adapter cable to hook up the proto boards or shields, but most stuff is fine.
  • The “NOOBS” setup experience gives you lots of options, including ones geared to specific uses like a media center PC. I was up and running in no time on the most common distro, Raspbian, which is a version of Debian Linux.
  • The unit doesn’t really like hot-swapping USB very much. I managed to corrupt my first install pretty easily and had to start again. If I understand correctly, part of this is due to using the SD card as your main boot disk, which is much more sensitive to I/O disruption than a traditional hard disk.
  • There are tools that make it easy to pop your SD card into your main computer and make a clone of it when everything is working the way you want, so that is certainly a good idea when working with this unit.
  • The networking stack seems a bit flaky with wireless. I got the highly recommended Edimax nano USB adapter, but I’m still having trouble getting the unit to respond consistently to SSH or RDP requests. I put in a scheduled job to restart networking every hour or so and that seems to have helped.
  • I got the Raspberry Pi camera module and it is extremely easy to work with. Right now I have it taking time-lapse photos of one of my seed starting trays. This tutorial worked great and it’s really simple to get working. More details on this in later posts.

All in all it’s an impressive little piece of engineering, particularly for $35. There are lots of possibilities for automation and monitoring that might be interesting to try on my little hobby farm. Many folks are already using a Pi or Arduino along with sensors to automate plant watering for instance. I bought a couple of moisture sensors that I’m hoping to get hooked up eventually, but as that requires some soldering it involves a bit more time to get up and running. I’m hoping to tackle that next.
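As a taste of what that watering automation might look like, here is a minimal decision-logic sketch (Python, with hypothetical moisture thresholds; real values would depend on the sensor and soil). The two-threshold band is a deliberate design choice: it keeps a pump from rapidly toggling on and off around a single cutoff:

```python
def next_pump_state(pump_on, moisture_pct, low=30, high=45):
    # Turn the pump on below `low`, off above `high`; in between, keep
    # the current state (hysteresis avoids rapid on/off cycling).
    if moisture_pct < low:
        return True
    if moisture_pct > high:
        return False
    return pump_on

assert next_pump_state(False, 25) is True   # dry soil: start watering
assert next_pump_state(True, 40) is True    # inside the band: keep watering
assert next_pump_state(True, 50) is False   # wet enough: stop
```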



Mar 15

How Much for that Microsoft ROM of Windows?

On March 17, Microsoft announced that it would be piloting a program with Xiaomi, a Chinese phone manufacturer, to provide select power users with Windows 10 ROMs for their Xiaomi Android phones. This program is billed as a way to gather information, but there’s a much larger opportunity here for Microsoft. A Windows 10 ROM that can be loaded onto an Android phone (any Android phone) would be a huge boon for Microsoft.

No more would people be limited by the Windows Phone hardware that their carrier decides is worthy of offering on their network. You could simply buy an Android phone (like the Samsung Galaxy S6), connect it to your computer, download a program, and install Windows on your phone. Suddenly every Android phone could be a potential Windows phone and switching operating systems would not require buying a new phone or switching carriers. If you like your phone but not its OS, you can change the OS but keep the phone.

It would be like Cyanogen, but not wedded to the fragmentation of the Android kernel that prevents newer versions of Android from running on old hardware. Instead of just pushing Microsoft services into Android, Microsoft could replace Android altogether. To sweeten the deal, Microsoft could make the Windows installer smart enough to back up a user’s files from Android and restore them into Windows. Suddenly the proposition of switching OSes would become a whole lot more interesting and available.

Unfortunately, there’s no indication that Microsoft is going in this direction, but the announcement of a ROM to flash Windows to an Android phone is good news and provides this promising opportunity.

Source: Ars Technica – Xiaomi and Microsoft to offer Windows 10 conversion for Android phones



Mar 15

Mobile Applications and Customer Service

This past weekend, a friend of mine got an alert on her phone that she had already used 75% of her data allotment for the month. She was only 6 days into the month; she has a 3GB monthly allotment and typically uses a little more than 2GB. So she was shocked at this revelation, since she had been home on her WiFi for the previous two days. After checking her data usage statistics for the offending program, she discovered that a game she hadn’t played in months was responsible for 90% of the total usage. She promptly emailed the publisher to inform them of the issue and request an explanation. She received a prompt reply from the publisher’s support team informing her that this was a known, high-priority issue but that a fix was still being developed. The suggested recourse was to “force close the app when you aren’t utilizing a WiFi connection to play the game”.

First of all, I want to praise the support staff’s prompt response (less than 3 hours on a Sunday afternoon) because that shows they care. This is a very important aspect of customer service, especially for mobile games that are literally a dime a dozen. By taking the time to respond promptly and attempting to understand a user’s issue, a company can go a long way toward retaining a customer.

However, the actual response fails to appreciate the reality of my friend’s situation.

My friend was heartened to see such a quick response. However, her mood quickly worsened upon reading the content of the response. As it turns out, this extensive and unintended mobile data usage is a known bug for the application. Since mobile data does not grow on trees, a bug that consumes several hundred MB of data per day could end up costing users a significant amount of money. The app was on track to consume nearly 10GB of data by itself over the course of a month. That’s a huge bug and would have cost my friend $60 in data overage fees alone and resulted in her turning off her mobile data for the rest of the month to avoid those charges. If this app is installed in any volume, that’s a lot of money being burned.
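The back-of-envelope projection works out like this (a sketch only; the 30-day month is an assumption):

```python
def projected_monthly_gb(used_gb, days_elapsed, days_in_month=30):
    # Extrapolate the current daily burn rate across the whole month.
    return used_gb / days_elapsed * days_in_month

used = 0.75 * 3     # 75% of a 3GB allotment, i.e. 2.25GB in 6 days
total = projected_monthly_gb(used, days_elapsed=6)
game = 0.9 * total  # the game accounted for 90% of the usage
print(total)        # 11.25 GB projected for the month
print(game)         # ~10.1 GB from the game alone
```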

Meanwhile, a fix for the issue is “in the works” but without a time frame on when it will be in place, and the “strongly recommended” interim solution is to force close the app, “if you’re looking to curtail mobile data usage”. I don’t know how you can get more obtuse as an organization! That’s equivalent to being on a sinking ship and, instead of saying “Get on a lifeboat”, saying “Please hang on while we try to fix the pumps”! It smacks of a lack of understanding for the customer’s situation and of a defensive strategy toward the company’s bottom line.

A Better Response

Just because you don’t have a fix yet doesn’t mean you can’t stop the bleeding. If a soldier is wounded on the battlefield and needs surgery, the field medic’s job is to stop the bleeding, not to perform the surgery. They need to keep the soldier alive so the soldier can make it to the surgeon. The same is true for mobile games. If the game starts causing adverse side effects, those effects need to be dealt with swiftly so the user doesn’t uninstall the game.

Upon being made aware of the issue, the company should have immediately patched the game to disable the feature consuming excessive data or, if that feature cannot be isolated, disable the game from using mobile data at all for the time being. This, coupled with a popup when the application is opened informing the customer of the issue along with the option to ignore the bug, would be the best way forward in the short term. It would stop the bleeding, reduce angry uninstallations, and allow the fix to be developed and adequately tested.



Feb 15

Thoughts on Using the Microsoft Kinect for Windows v1

Recently, I had the opportunity to put in a few extra hours working on one of our projects here at Clarity that leverages Microsoft’s Kinect for Windows device. I own an Xbox 360 and an Xbox One, so I’m very familiar with the capabilities (and limitations) of Kinect. At home, I use Kinect to drive my entire entertainment system by voice. The only instance where I have to use a remote is while using Windows Media Center on my Xbox 360.

With that said, I have never had the pleasure of programming against one. All of the projects I have been involved with previously (both at Clarity and elsewhere) have been server-based web applications. So I was thrilled to have the opportunity to leverage what I consider to be one of the coolest pieces of consumer-grade tech ever released.

What Were You Doing?

Part of this project uses the Kinect to identify a variety of gestures like jumping, spinning, or shaking. The Kinect’s skeleton tracking makes identifying where the primary user is at any given time a breeze. The trouble is that gestures are not point-in-time positions but a series of actions performed over a small time interval. So it’s necessary to keep track of a series of skeletons over the course of the gesture to make sure each part of the gesture happens in the right sequence.

I was working on different gestures in parallel with a counterpart in Croatia. We both started at the same time and didn’t share code until after we were done. As a result, we used very different strategies to track a gesture. My counterpart broke each gesture into component subgestures: for jumping, that would be standing, body moving vertically, and then body falling. Putting those three subgestures together corresponds to a single “jump” gesture.

On the other hand, I captured the specific joint positions for each frame over a set window of time (3 seconds) and then examined the frames in sequence to verify that the gesture was correct. This was especially useful for detecting spinning, because each frame’s interpretation depends on the frame before it.

The Kinect is not without its limitations though.  Since it’s a set of 2D cameras attempting to imitate 3D, it has difficulty detecting which way you’re facing (toward or away).  For a gesture like spinning, this is problematic since the moment you’re facing away from the sensor your body gets flipped in the sensor (i.e. your left shoulder is detected as your right shoulder when facing away from the camera).  This is a limitation of the v1 detection software because it doesn’t distinguish faces as part of skeleton tracking.
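To make the frame-window idea concrete, here is a language-agnostic sketch in Python (this is not Kinect SDK code; the 2D shoulder coordinates are a simplification, and it deliberately ignores the facing-direction flip described above). It accumulates the signed change in shoulder-line angle across the buffered frames and flags a spin once a full turn adds up:

```python
import math

def shoulder_angle(left, right):
    # Angle of the left->right shoulder line in the horizontal (x, z) plane.
    return math.atan2(right[1] - left[1], right[0] - left[0])

def detect_spin(frames, full_turn=2 * math.pi):
    # frames: buffered (left_shoulder, right_shoulder) pairs, one per
    # captured frame, each joint given as an (x, z) coordinate.
    total = 0.0
    prev = shoulder_angle(*frames[0])
    for left, right in frames[1:]:
        cur = shoulder_angle(left, right)
        delta = cur - prev
        # Unwrap across the -pi/+pi boundary so per-frame deltas stay small.
        if delta > math.pi:
            delta -= 2 * math.pi
        elif delta < -math.pi:
            delta += 2 * math.pi
        total += delta
        prev = cur
    return abs(total) >= full_turn

# Synthetic frames: shoulders rotating 0.3 radians per frame.
turn = [((-math.cos(0.3 * k), -math.sin(0.3 * k)),
         (math.cos(0.3 * k), math.sin(0.3 * k))) for k in range(22)]
assert detect_spin(turn)           # 21 * 0.3 = 6.3 rad >= 2*pi: a spin
assert not detect_spin(turn[:10])  # only ~2.7 rad of rotation: no spin
```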

How Easy Was It?

Leveraging the Kinect to create gestures was surprisingly simple and straightforward for such a complex piece of technology. You get the set of skeletons for the given frame and then analyze the position of the desired joints of each skeleton to see if it meets the gesture requirements. All of this is outlined in the sample programs so I was able to go from nothing to successfully detecting gestures in several hours.

It was refreshing to have such an efficient experience. A lot of esoteric technology requires significant configuration just to get it into a working state. Also, the majority of the clever or difficult use cases are not covered in the samples, requiring a significant amount of trial and error or searching for solutions. I really hope to use the Kinect again in future project work.



Dec 14

Configuring Working Hours in K2 Blackpearl

Recently I ran into a task where I had to configure sending reminder notifications only during business hours. I was used to configuring event escalations with the default setting, so I was a bit stumped on how to do this. Luckily, it isn’t that complicated, and K2 provides the necessary tools to do this configuration. Let me explain…

Step 1

In order to configure your own working hours, you have to create your own Time Zone. To do this, you need access to the K2 Workspace. Once there, navigate to Management Console -> Workflow Server -> Working Hours Configuration. If you haven’t set anything up yet, it will tell you nothing has been configured. Right-click on this node and select “Add New Zone”. This will open a new window where you can specify your time zone and also define your working hours. In addition, you can include any exceptions (such as holidays) or special days (such as overtime, etc.). All this is explained in more detail at this link:

One thing to be cautious about is the “Is Default” check box. When checked, this will impact all instances that are configured to use the Default Server Zone. If you do not want to apply the same setting to all processes, leave it unchecked.

Step 2

The server part is done. Now let’s get to the portion of configuring the event escalation.

Open your Event Escalation. By default, the “Use default working hours during execution” will be checked and the “Use Server Default Zone” radio button will be selected.

Uncheck the “Use default working hours during execution”. This will now display a Zone field where you can drag and drop your custom zone. Go to Object Browser -> Environment -> Workflow Management Server(s) -> Workflow Management Server -> Zones

Under here, you should be able to see the Zone that you configured in Step 1. Drag and drop it, and behold: you have just finished configuring Working Hours for your event escalation. For more details on the various options available when specifying a zone, check out this link:




Dec 14

K2 Workflow: Custom functions

Recently I ran into a scenario where I had to repeat the same expression in various activities, and it would have been really time-consuming had I not discovered custom functions. To give an example, suppose I had to extract the first name from the Participant Name, which is displayed as “FirstName, LastName”. I would have to write an expression similar to:

First item(Split(Replace(input, <spaces>, <empty string>), ','))

Basically, I first replace any spaces with an empty string, then split on the comma delimiter, and then extract the first item. Imagine doing this in each activity over and over. Luckily, you have a way to save your own custom function, and the way to do it is very simple. When you click on the option to create an expression, click on the little icon as shown in the below link:
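For clarity, here is the same logic expressed in Python (the K2 expression above is the real artifact; this is just an illustration of what it computes):

```python
def first_name(participant_name):
    # Replace spaces with an empty string, split on the comma,
    # and take the first item.
    return participant_name.replace(" ", "").split(",")[0]

assert first_name("FirstName, LastName") == "FirstName"
```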

Check the box to save this custom function. This will save the function under the “Saved Functions” section.

Now when you have to reference this function in any other activity, all you have to do is to drag and drop your custom function and you don’t need to repeat the function again and again! Isn’t that cool!

But wait a minute: how do I specify what the input values for the function should be? It is a pain to drill down into the deeply nested function again to specify the input values. This is where it can be tricky. For my requirement, I had to work with the Participant Name, and since this is found on every activity, I didn’t need to substitute the function input values with anything; I just used it with what it was already taking.

Another option is to pre-define the input variables, for example, suppose you have to do:

Sum(Square(x), Cube(x))  in multiple places, then define a data field x and set the value of x to what you want it to be before you invoke your custom function.
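That composed expression works out like this (Python, for illustration only; `x` plays the role of the pre-set data field):

```python
def square(x):
    return x * x

def cube(x):
    return x ** 3

def saved_function(x):
    # Equivalent of Sum(Square(x), Cube(x)).
    return square(x) + cube(x)

assert saved_function(2) == 12  # 4 + 8
assert saved_function(3) == 36  # 9 + 27
```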

Making changes to your custom function

OK, so you were able to create your custom function. Now what if you have to change it? Pretty simple: double-click on any place where you referenced it, make the necessary changes and then, as shown in the link, check the “Save function configuration” box. If you want to use the same name, it will ask if you want to replace it. Click Yes. This change will now apply across all places where the function is referenced.

Deploying changes to another environment

When you deploy your changes to another environment, you will notice that your function does not show up in the Saved Functions section. However, this does not mean that your K2 process will fail. What happens is that the logic is embedded inline in each activity. So if you want your custom function to show up as a Saved Function in the new environment, open up any activity where you used the custom function, double-click on the function and click on “Save function configuration”. The custom function will now show up in the Saved Functions section.

Hope you find this as helpful as I did; it sure saved a lot of time generating the same expression over and over.



Dec 14

K2 Workflow: Creating sub-processes (aka IPC event)

Have you ever run into a scenario where you found yourself executing a certain set of activities over and over and wished you could have modularized it and written it as a separate function, similar to how most programming languages do? Well, there is hope in the K2 world, and I will try to explain the steps for how to do it. To illustrate with an example (this is pseudocode only):

function SubProcess(a, b, c)
    DoSomething();
    DoSomethingAgain();
    return x, y and z;

The above function SubProcess takes in the input parameters a, b and c, executes two functions, DoSomething and DoSomethingAgain, and then returns x, y and z.

How does one achieve this in K2? Using an IPC event.

As in any language, the first thing we need to do is create the function. In K2, this is very similar to creating another K2 process, so I won’t dwell on it too much since I am assuming you are already aware of how to create a process. Once the sub-process is created, deploy it to the server so that the main process can reference it.

Go to your main process, drag an IPC event from the toolbox, and you will be provided with the option to specify the process name. Click on Browse and this will display a list of available processes. Select the sub-process from this list. Then specify what value to use as the folio number for this sub-process.


You then have the option of whether this sub-process is to be executed synchronously or asynchronously. Choose based on your criteria, then click Next. You will then come to the “Process Send Field Mappings” wizard screen. This screen allows you to map any values that you pass into this sub-process; in our example, it is a way to specify values for the input parameters a, b and c. One cool feature: if you name your input parameters the same as the parameter names you use in the main process, you can click on Auto-map and the mapping will be done automatically for you (otherwise, you just have to drag and drop the mappings):
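The Auto-map behavior is essentially matching by name. A rough analogy in Python (the K2 wizard does this for workflow fields, not function arguments; the names and the stand-in sub-process body here are made up):

```python
import inspect

def sub_process(a, b, c):
    # Stand-in for a deployed K2 sub-process.
    return {"x": a + b, "y": b + c, "z": a * c}

def auto_map(sub, main_fields):
    # Pass along only the main-process fields whose names match the
    # sub-process parameter names, just like clicking Auto-map.
    params = inspect.signature(sub).parameters
    return sub(**{k: v for k, v in main_fields.items() if k in params})

main_fields = {"a": 1, "b": 2, "c": 3, "unrelated": 99}
assert auto_map(sub_process, main_fields) == {"x": 3, "y": 5, "z": 3}
```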


Click Next to go to the “Process Return Field Mappings” screen. This is how we map values from the sub-process back to the main process, or in our example, a way to return x, y and z to the main process.


Click Finish and you are done; you can begin using this sub-process as many times as you like!

All this is fine and dandy, but what are the other benefits of using a sub-process (other than code reuse)? To mention a couple:

  • It keeps the main process from becoming too long and makes it more manageable.
  • Even if the code is not being reused, it is helpful to break things into separate modules so that it is easier to investigate any issues.
  • And the best part: if you have to fix anything in the sub-process, you only need to deploy the sub-process and not the main process! So if a workflow was broken in the sub-process, you only have to fix the sub-process, and the workflow activity will continue from there.

For more info, check out this link:

Hope you found this helpful!



Nov 14

K2 Workflow: Apply HTML formatting on email events

For a novice, it would seem that applying HTML formatting would be straightforward with the HTML option that is provided in the Email event (or Client event) wizard, for example:


However, when I saw the output, I was stumped. The output was something like this:


Dear XYZ,

Please assign a name for task 123. Use the following link to open the worklist item:

Click to open worklist item


A couple of things in case you haven’t noticed:

1. The Participant name was not in bold.

2. The task text was italicized but not the variable itself. (By now you will have guessed that formatting is not being applied to variables.)

3. There is no option to give an alternate name to the Worklist item link.

So how can we apply formatting to variables and, generally speaking, apply HTML formatting however we want? Well, there is a way. If you look closely at the format ribbon, there is an option called Load HTML template:


This allows you to supply the actual template as you would have written it in an HTML page. With this option, you have to specify everything (for example: font style, size, etc.); otherwise it will assume defaults, unlike the previous HTML format option where you could apply formatting changes from the format ribbon.

With this option, you can now do whatever you want: apply formatting to variables, give alternate names to links, etc. You can also preview your changes by clicking on the “Preview this message in a new window” link.
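As a rough idea of what such a template might look like (an illustrative sketch only: the bracketed tokens stand in for fields you would drag in from the K2 context browser, and the styling is arbitrary):

```html
<html>
  <body style="font-family: Verdana; font-size: 10pt;">
    <p>Dear <b>[Participant Name]</b>,</p>
    <p>Please assign a name for task <i>[Task Number]</i>. Use the link below:</p>
    <p><a href="[Worklist Item URL]">Open worklist item</a></p>
  </body>
</html>
```

Note how the bold and italic tags wrap the variable tokens directly, which is exactly what the ribbon-based HTML option couldn't do.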

You can find more information on K2 site at:

Happy formatting!



Nov 14

Sharing Views in MVC – A Quick Start with RazorGenerator.Mvc


Recently, on a few different projects the opportunity to share views between web sites has arisen. I looked around and found a few different options, but the one that stood out the most to me was the NuGet package RazorGenerator.Mvc. This package allows you to compile views into a separate .dll and use them across multiple MVC web projects. However, I did not run across an easy-to-follow, complete start-up guide. In this write-up, I’ll lay out the steps to set up RazorGenerator.Mvc as well as some of the pros and cons of this approach.

All screenshots are from Visual Studio 2013.

Quick Start

Step 1: Create an MVC Web Project
Open Visual Studio and create a new project:


Select MVC, change the authentication to whatever you want (I’m going to do No Auth here) and click OK:


Step 2: Add a Second Web Project for Shared Views
If you read the RazorGenerator.Mvc documentation, it will suggest that you add a class library for this next step. I suggest that you add a second MVC project instead. It will give you the solution structure you need, view type-ahead, and, out of the box, additional setup you would otherwise need to do by hand. Remove all of the folders except for Views. Delete all the excess views and the global.asax file until your solution looks similar to this:


Step 3: Add a Reference to the Shared Views Project
Right-click the references in your MVC project, select add reference, and add a reference to your Shared Views project:


Step 4: Add a Reference to RazorGenerator.Mvc
Right-click the Shared Views project and select ‘Manage NuGet Packages…’. Search for ‘RazorGenerator.Mvc’ online and click install:


When you do this, you’ll see that a new class – RazorGeneratorMvcStart – was added to the App_Start folder.

Step 5: Create a Shared View
Remove the ‘Index.cshtml’ file from the ‘Views > Home’ folder of your MVC project. Add a new ‘Index.cshtml’ view to the ‘Views > Home’ folder of your Shared Views project. Open up the properties window for this new view and in the ‘Custom Tool’ section enter ‘RazorGenerator’:


When you do this, you’ll see that a new class is generated for the Index view:


If you open up this new class, you’ll see the RazorGenerator creates a file similar to how a text template would – the entire view should be represented in this class. Also, if you look at the PageVirtualPathAttribute, you’ll see the mapping that is created to allow your consuming MVC projects to find the view.

Step 6: Running the Application
Once you’ve completed the above steps, you can run the MVC project. Since there is no home index view in the MVC project, you should successfully see the index page from the Shared Views project.

Overriding a Shared View
What if you have an index view in one of your MVC projects that you want to override the shared view? By default, RazorGenerator.Mvc shared views will always override views of the same path/name in consuming MVC projects. In order to prevent this behavior, you need to change the way in which the RazorGenerator is added to the View Engines. In the RazorGeneratorMvcStart.cs file that was added to the Shared Views project, add the RazorGenerator.Mvc engine to the end of the list instead of the start:
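The override rule is just view-engine resolution order: engines are consulted in sequence and the first match wins. A language-agnostic sketch of that idea in Python (the engine objects here are stand-in dictionaries, not ASP.NET types):

```python
def resolve_view(name, engines):
    # Engines are consulted in order; the first one that can supply the
    # view wins, so ordering controls which copy "overrides" the other.
    for engine in engines:
        if name in engine:
            return engine[name]
    return None

shared = {"Home/Index": "shared copy"}
local = {"Home/Index": "local copy"}

# Precompiled engine registered first: the shared copy always wins.
assert resolve_view("Home/Index", [shared, local]) == "shared copy"
# Precompiled engine registered last: a local view overrides the shared one.
assert resolve_view("Home/Index", [local, shared]) == "local copy"
```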


This will allow you to override shared views or partial views in consuming MVC projects.

Potential Issue
If you do decide to add a Class Library instead of a second MVC project for your shared views, and you receive the error message “Could not precompile the file ‘ViewName.cshtml’. Ensure that a generator declaration exists in the cshtml file.” after compiling, you need to add a web.config with the proper MVC assembly bindings. This can be done by copying the web.config from your main MVC project.

Another way to tell that this is the issue is if your ViewName.cshtml file includes the text “Could not precompile the file… Ensure that a generator declaration exists in the cshtml file.”:


Final Thoughts

A good use case for this scenario would be if you had two or more websites that share a lot of common views, such as a single company with multiple brands or internal/external versions of a website. This shared library would allow them to share Views/Partial Views/ViewModels across all brands. However, they would all maintain their own assets (images, etc.) and master layout pages, giving each site the ability to create a unique look and feel. While it is possible, I’m a little hesitant about including Controllers in this shared library, but there may be a good use case for it. JavaScript files, images, and other non-compiled assets will have to be kept in the MVC projects instead of the shared library. This means that if a shared view depends on a specific JavaScript file, each site that uses the view will have to include a copy of that file. However, a post-build event or something similar can be used to mitigate this concern.

I’ve included the source code for the sample solution below.

Source Code
