Sep 15

K2 Check Box List Control – Populating and Saving Data from a WCF Service

There are multiple approaches to implementing multi-value fields in K2 SmartForms, and using the Check Box List control is a very powerful one. This post demonstrates how to use this control with a WCF service to load and save data.

First, we will populate a Check Box List with all the possible values that users will be able to select. Then, we will process and save the selection made by the user. Last, we will load different selections into the control depending on the user.

For this demo, I have created a new CheckBoxListExample view and added a Check Box List control.


Populating the Control

For data-populating purposes, the Check Box List is not much different from other K2 controls. The only thing we need to do is define a data source along with a Key and a Display property.

Let’s then create a service method that we can use as the data source.

The following method will return a list of data objects. Each of them will be a color with both a Key and a Name property.

public List<ListData> GetAllColors()
{
     var colors = new List<ListData>();

     colors.Add(new ListData { Key = "B", Name = "Blue" });
     colors.Add(new ListData { Key = "R", Name = "Red" });
     colors.Add(new ListData { Key = "Y", Name = "Yellow" });
     colors.Add(new ListData { Key = "G", Name = "Green" });
     colors.Add(new ListData { Key = "O", Name = "Orange" });

     return colors;
}
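
For reference, ListData can be a simple data contract exposing the two properties the control needs. The class below is only a sketch (the name and attributes are assumptions; any serializable type with Key and Name properties works):

// Requires a reference to System.Runtime.Serialization.
[DataContract]
public class ListData
{
     [DataMember]
     public string Key { get; set; }   // value stored for the item

     [DataMember]
     public string Name { get; set; }  // text displayed in the Check Box List
}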

After publishing the service, we add our method to a Smart Object, making sure that the Key and Name properties of our colors are mapped correctly.


Now it is time to use the new method as the data source for our Check Box List control.

Let’s open the Configure Data Source dialog (Control Properties -> Data Source section -> Type) and let K2 know that we want to use the new method as the data source. We also want the Key property to be the value of the items, and the Name property to be the text displayed.


After saving the changes and running the view, we can see how the control is populated with the values coming from the service method that we created.


Sending Selected Data

The control is now filled with real data, which enables users to make selections. The next natural step is to store these selections in a more persistent storage system, like a database.

Here is where things get a little tricky, but only a little.  The Check Box List control sends the list of selected values in XML format.

As an example, the XML sent when the colors Blue (B) and Green (G) are selected looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<object parentid="d7058dc8-e2af-4ebc-85dc-87d918e7585b" parenttype="Object">
      <field name="Key">
            <value>B</value>
      </field>
      <field name="Key">
            <value>G</value>
      </field>
</object>

The “value” nodes are the ones containing the Key property of the selected options, so we will need to create a method in our service that processes this XML and extracts these values.

We can do this in many different ways, but a very simple approach is to use the XmlTextReader class to retrieve these values and put them into something more manageable, like a list of strings. Later, we could use this list to store the values in the database.
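
For instance, here is a minimal alternative sketch using LINQ to XML; it extracts the same values in a single expression:

// Equivalent extraction with LINQ to XML (requires System.Xml.Linq and System.Linq):
var selection = XDocument.Parse(colorsXML)
                         .Descendants("value")
                         .Select(v => v.Value)
                         .Where(v => !string.IsNullOrEmpty(v))
                         .Distinct()
                         .ToList();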

The following method will process the selected options using the XmlTextReader approach described above.

public void SaveUserColors(string userName, string colorsXML)
{
      var selection = new List<string>();

      using (XmlTextReader tr = new XmlTextReader(new StringReader(colorsXML)))
      {
          bool canRead = tr.Read();
          while (canRead)
          {
              if (tr.Name == "value")
              {
                  // Advance to the text node inside the <value> element.
                  canRead = tr.Read();
                  if (!string.IsNullOrEmpty(tr.Value) && !selection.Contains(tr.Value))
                  {
                      selection.Add(tr.Value);
                  }
              }
              canRead = tr.Read();
          }
      }

      SaveUserColorsToDb(userName, selection);
}
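
SaveUserColorsToDb is where the list would be persisted. A minimal sketch using plain ADO.NET might look like the following (the _connectionString field, the table, and the column names are all assumptions):

private void SaveUserColorsToDb(string userName, List<string> selection)
{
    // Assumed schema: UserColors(UserName nvarchar, ColorKey nvarchar)
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();

        // Replace the user's previous selection with the new one.
        using (var delete = new SqlCommand("DELETE FROM UserColors WHERE UserName = @user", connection))
        {
            delete.Parameters.AddWithValue("@user", userName);
            delete.ExecuteNonQuery();
        }

        foreach (var key in selection)
        {
            using (var insert = new SqlCommand("INSERT INTO UserColors (UserName, ColorKey) VALUES (@user, @key)", connection))
            {
                insert.Parameters.AddWithValue("@user", userName);
                insert.Parameters.AddWithValue("@key", key);
                insert.ExecuteNonQuery();
            }
        }
    }
}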


Let’s now add the method to the Smart Object.


Once the Smart Object is deployed, it is time to call the new method from the view. For this purpose, I have created a Drop-Down List control to display the different users in our application, and a Save button that will trigger the saving functionality.


A new action to call the Smart Object method will need to be set on the button’s “Clicked” rule.


In this call, we need to pass the Check Box List control, along with the Users Drop-Down, as arguments.


After saving the changes and running the view, we can now make a color selection, specify a user in the Drop-Down and send this data to the service by clicking on the Save button. Our service method should take care of processing this data.


Setting User’s Selection

So far, we have seen how to populate the Check Box List control and send the selected colors to the service so that they can be stored in a database. Let’s now see how we can set a particular set of options in our control.

To do this, we need to pass a semicolon-separated list with the values of the colors we want to set in the control. For example, if we want to set Blue and Green in the control, we will need to pass the string “B;G;”.

Knowing this, let’s create a service method that returns a different list of colors, in the required format, depending on the user specified.

public string GetColorsByUser(string userName)
{
    var sb = new StringBuilder();

    // For the demo, only this user has a stored selection;
    // any other user gets an empty string (no options set).
    if (userName == "mdelarosa")
    {
        sb.Append(string.Format("{0};", "B"));
        sb.Append(string.Format("{0};", "G"));
        sb.Append(string.Format("{0};", "Y"));
    }

    return sb.ToString();
}

Let’s now add the method to the Smart Object.


And let’s make some changes to the UI. Our view will now have a Populate User Colors button that will set the Check Box List control to the colors chosen by the user specified in the Drop-Down.


We now call the new Smart Object method from the ‘Clicked’ button rule and pass the user in the parameter.


Finally, we assign the returned list of colors to the Check Box List control.


After saving the changes and running the view, we can see how the colors set in the control change depending on the user selected.





Sep 15

Our Summer with Parse

We have been using Parse as the backend platform for a Web site and iOS app for a few months now and wanted to share some of our feedback (Pros, Cons, and Gotchas) on a few of the functional areas. My team’s primary focus on the project was integrating the Web site with Parse using the JavaScript SDK.

Parse offers a NoSQL database with a web-based UI, called the Data Browser, to support management of the classes and the associated data.
Each record by default is assigned the following fields: objectId (to serve as the row’s unique identifier), createdAt (to capture the date/time of initial creation), updatedAt (to capture the date/time of when the record was last updated), and ACL (to control the read/write permissions for that particular record).

• Parse supports more complex data types such as GeoPoint (for storing the latitude and longitude of a particular location), Array, File (for file storage within Parse), Pointer (the typical foreign-key relationship) and Relation (to store a one-to-many relationship).
• Parse can easily extract all of your data. You initiate a request in the Parse Dashboard and Parse emails you a zip file of JSON files – one file per table. I would usually then copy the content of the files into an online JSON viewer to visualize the data better.
• The Data Browser only allows you to view one class at a time, although it does support easy navigation to other classes if there is a Pointer or Relation on the class. Coming from a SQL Server background I was used to writing queries in Management Studio and seeing multiple results in one window that were sorted by one or more specified columns. This particular project was not as complex as some of my past projects, so navigating between classes and noting the objectIds wasn’t too painful.
• The ability to update values directly in the Parse Dashboard Data Browser is a blessing and a curse. It makes direct inserts and updates quick to execute without having to write query statements, but since changes are committed immediately there is no way to rollback. Use caution when updating a value, deleting a row, or deleting a column since it cannot be undone/rolled back.
• If an object is failing to save without an intuitive message, double check the Class Level Permissions for the object, which is an Advanced setting under the Security option. This caught us once because even though the row-level permission was correct, the Write / Update permission at the class level was disabled for some reason and we were unable to update a record. The error message we received wasn’t too informative.
Cloud Code
Parse supports running code in the Parse Cloud, which they host, as Cloud Code. Cloud functions can be invoked from the client SDKs and through the REST API. Parse also supports integration with third parties via Cloud Code. In our implementation we overrode the beforeSave and afterSave methods for certain objects in order to take another action when a record was being inserted/updated. We also integrated with Stripe as a payment mechanism.

• The Cloud Code is very easy to deploy to the server via a command line.
• The Parse Dashboard supports viewing the actual file contents of the deployed Cloud Code. Normally, tracking of build versions or files’ modified dates is the only way to infer what version of the code was running on the server. By seeing the actual code, you can confirm if functionality was actually deployed or if it somehow got missed.
• The integration with Stripe was easy to implement with the Cloud Code.
• Logging functionality is present but not as robust as other implementations I have used on .NET projects such as log4net’s ability to log to a rolling file or database for better auditability and monitoring.
• The beforeSave and afterSave methods were helpful when writing code to affect other tables. Occasionally there was a gotcha in that the original record that was being inserted/updated did not have permission to update a record in the other table due to ACL permissions. Luckily you can override that restriction with the useMasterKey call, which allows the code to execute under the master user account instead of the user initiating the request.

Parse supports configuration values at the app level. In our implementation we used a handful of config values to manage variables that could change.

• The Parse Dashboard has an interface which makes it easy to add and update configuration parameters. A config parameter can be any of Parse’s supported data types.
• The config value is available to the system on the next read and is retrieved by a simple get() call.
API Console
The API Console is a great built-in feature to allow you to easily test API calls. No real Pros/Cons come to mind, but just make sure you are using the correct action for what you are trying to execute.
Analytics
Parse provides a mechanism to monitor the real-time health of your system in a visual way. For our particular project we just scratched the surface of its capabilities by using the Parse.Analytics.track call to track client-side errors, but we will be looking into this more as our project moves to Production to analyze things such as API Requests, Slow Queries, etc.
• The track method expects string values for the keys. In our case we were receiving numeric error codes and tried to pass them to the track method as-is, but this generated an error on the client side, which we noticed by having the Developer Tools / Console window open. To get around it we had to force the error.code value to a string first (for example, by concatenating it with an empty string before passing it to track).
User Management
Parse automatically handles many of the capabilities necessary for user account management. Actions such as signing up, logging in, resetting a forgotten password, and even integration with Facebook are all built-in. With our implementation we added a requirement for a user’s password to be at least 6 characters via client-side validation.

Misc. Feature
Clone App – this feature was great to have when we started Phase 2 and did not want to recreate the whole backend from scratch. We were able to clone our main environment effortlessly to use as a base for our next version.
Overall we have enjoyed our summer developing with Parse and are looking forward to exploring even more what Parse has to offer. Functionality such as Background Jobs and more advanced Analytics are just a few of the areas we will be looking into for Phase 2. Check out their expansive Documentation on their site for more information.



Jul 15

Restroom Monitor Mark II

Have you ever found yourself in need of answering nature’s secondary call, walking across the office to heed it, only to find that all the stalls are in use? <sarcasm>Being situated at the far end of the office, this was a very serious issue</sarcasm> – or at least I could pretend it was to give myself enough of an excuse to do something about it. If only there was a way to know, before ever leaving your desk, if your call would be able to be answered in peace or not.

The concept is simple – determine the state of a restroom and put it somewhere that it can be checked before heading over. A similar system was installed many years ago using wireless, battery-powered magnetic switches updating a website – but the very obvious boxes were vandalized and required frequent maintenance. My objective was to make something completely invisible (and that wouldn’t make people uncomfortable – as restroom tracking could easily do) and impervious to all but the most malicious sabotage – while providing a seamless way to check the state.

The restrooms in question are private rooms (not just stalls), with their own door and deadbolt – the door is always closed, and the occupancy is determined by the position of the deadbolt, which also has a red/green flag on the front of the door. One of the main concerns about this project was doing it in such a way that it wouldn’t make anyone uncomfortable – which would rapidly kill the project – invasion of privacy lawsuits can be expensive. A wide variety of methods of determining state were considered and discarded:

  • Motion sensor – too much like a camera and bad for long visits, tough to get a definitive state
  • Infrared sensor – too much like a camera and visible
  • Red/green color sensor looking at flag – too much like a camera
  • Magnetic switch or door hinge rotation sensor – can’t tell if the door is locked or not
  • Deadbolt induction sensor – too fragile
  • Switch connected to a Bluetooth dongle to communicate – sitting inside a metal door frame could have connectivity issues and the battery would have to be replaced

I eventually settled on a deadbolt switch designed specifically for commercial installation, wired through the door frame to above the dropped ceiling (isn’t drilling holes in the office walls fun?). The switch sits inside the deadbolt pocket (so is not visible), is designed for industrial usage (so won’t break with repeated use), and is wired so that connectivity is perfect and there are no batteries (so requires no maintenance). The switch I used was this deadbolt pocket switch.


Once you have a way to determine the state of the restroom, the next step is to be able to read the state and send it somewhere. After working with a couple different microcontrollers, I decided to use the Spark Core because of the on-board WiFi and extremely easy development/deployment process. After working with it, I could not recommend it highly enough. After using the phone application to connect it to WiFi and tie it to your account, you update the microcontroller by coding the application in their web IDE, then pushing the automatically verified and compiled code to the device over the public internet. It’s one step short of pure magic – and a drastic and welcome change from the microcontrollers I’ve worked with in the past. All that aside, it’s a simple task to have the Core receive input from the switch when it changes, then POST the new state over WiFi to a listening service endpoint. The microcontroller is wired into power from a standard mains-to-USB power supply – again, removing any dependency on battery maintenance.

I did experiment with using USB battery packs to see what kind of battery life I could get – and ran into an interesting behavior.  The battery packs that automatically turn on do so by monitoring the current dropped across the power pins.  They also automatically turn off when too low a draw is detected – assuming that nothing is actually using the power.  To conserve power draw, I disabled the WiFi on the microcontroller when not actively transmitting a changed switch state.  While this did save energy, it also made the power draw low enough (<10 mA) that the battery pack automatically turned itself off thinking that nothing was plugged in.  To get around this, I wired up a transistor circuit to put a 50 millisecond draw across the power pins (through a resistor) every 7 seconds (suggested by this article).  This was effective in keeping the device on – but the biggest battery pack I could find (20,000 mAh) only lasted about a week.  In the interest of a platform requiring zero maintenance, I instead decided to hook up a USB wall wart power supply and wire that in rather than relying on battery power.


(The two grey wires go through the wall, down the frame, and to the two door switches – via Molex connectors for easier maintenance; the lower black cable is power from a USB wall wart, and the upper black cable goes to the indicator lights – more on that later.)

On the other side of the POST request, I have a Windows Service running on a server, which is self-hosting both an HTTP endpoint for the microcontroller to call with switch state changes, as well as a Skype for Business platform and user endpoints representing each restroom. Since everyone at Clarity is on our internal IM client all day (Lync/Skype for Business), it’s logical that we’d look there for the state of the restrooms. Since Skype for Business endpoints already have a presence state associated with them that shows red/green, it’s an absolutely perfect fit to have endpoints for each restroom that can be Available or Busy, according to the status of the switch on the physical room. Welcome to the internet of things (or places)!
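
As a rough sketch of the listening side (not the actual service code – the URL, port, and payload format here are assumptions, and the Skype for Business/UCMA hosting is omitted), the HTTP endpoint can be as simple as an HttpListener loop:

using System;
using System.IO;
using System.Net;

class SwitchStateListener
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/restroom/"); // assumed URL and port
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            string body;
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                // The microcontroller POSTs its switch state, e.g. "room=1&locked=true".
                body = reader.ReadToEnd();
            }

            Console.WriteLine("Switch state change: " + body);
            // Here the real service would parse the state and set the matching
            // Skype for Business endpoint's presence to Busy or Available.

            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}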


So that’s all nice and dandy!  Indicators on our computers getting the state of the restroom – that seems good enough.  Yea, I wasn’t happy with “good enough” either – it just wasn’t quite over-the-top enough yet.  Clearly, more was needed.

I printed 3D models of toilets (can I just say how much I love the previous 6 words?) on the office 3D printer (MakerBot 2) using clear plastic filament, and embedded LED lights into the back of them (hot glue to the rescue). Since the microcontroller already knows the state of the switches and is able to put out a very convenient 5 volt current that can drive the LEDs, it was a simple task to wire up the translucent toilets to lit LEDs indicating their respective state – functioning as remote physical indicators for the rooms that could be glanced at before heading down the hallway to the doors themselves.  As a side note, I used an Ethernet cable to go from the microcontroller to the RGB LEDs – 3 power sinks per light plus one shared voltage source needed 7 wires to run to the models, and Ethernet cables are a very convenient 8 strands, and are easily available in an office.  I wired up female connectors from Ethernet ‘extension’ cable to both ends, so that the light is easy to disconnect and can use standard Ethernet cables of whatever length is needed to run the distance without having to re-solder the pins.  In the picture above, it’s the upper black wire that I said I would mention later.


Next, since the Skype for Business endpoints that were showing the presence for the restrooms already support IM very easily (and were coming in to an application that I controlled), why not allow the restroom endpoints to have conversations? I added the ability for the endpoints to respond to inquiries about usage for the day with some basic statistics (from the aforementioned state change data), suggest places for lunch (randomized, suggesting 3 different cuisines from a database of over 50 places in the immediate vicinity), and tell jokes (all of them awful – from a database collection of several thousand).

Some people seem to have a thing about using a restroom when the seat still carries warmth from the last occupant. To accommodate those folks, I added in a ‘cool-down’ period, based on how long it was occupied. Y’know, because it was absolutely necessary.


Finally, since data was getting sent to the service anyway with each switch change, I set up a database that the historical changes could be written to. This way, we can compile all sorts of utterly useless statistics about restroom usage, preference between the two, peak times of day, etc. What better information is there to offer at quarterly meetings?


And with that, the Pooper Snooper Mark II (er… Restroom Monitor) was born. 



Jul 15

Uploading Files Asynchronously in Internet Explorer 8-10 with Server Responses

I recently had to debug an issue on a client project related to Internet Explorer versions 8-10 not correctly handling errors returned from a server during an asynchronous file upload.  I solved it, but the answer doesn’t appear to be anywhere on the Internet; hence this post.

Anyway, as background: until the advent of XML HTTP Request v2, if you wanted to upload a file asynchronously (i.e. without a full page POST-back) from a browser, your best bet was to embed a hidden IFrame on your page and POST back through it.  This actually works pretty well and is well understood technology.  See here for an example:

What isn’t well understood is how to communicate back to the browser what happened on the back end.  For example, the file uploaded could violate a size limit or other items POSTed with the form might not be kosher.  You want to make sure the user knows this, but you’re using an IFrame.  So you need to read the content of that response in the IFrame using the load event on the IFrame.  You could simply return an error via a non-success response like a 400 or 500 error and read that from the IFrame in JavaScript for processing.  On modern browsers, that works fine.  However, on older browsers like Internet Explorer versions 8-10, a non-success response essentially locks the IFrame from the parent frame.  You’re basically bombarded with “Access is Denied” messages when trying to access any page content from outside of the IFrame via JavaScript in this case.

How to proceed?  With a Hack (unfortunately).

In this instance, your best bet is to leverage a small JSON response for both success and failure, transmitted under the “text/html” content type.  You need to use the text/html content type so Internet Explorer doesn’t prompt the user to download the response, which is decidedly not what you want.  Below is an example of this.

public JsonResult AddOrderDocument(DocumentFormViewModel viewModel)
{
    if (!ModelState.IsValid)
    {
        return Json(new ModelStateException(ModelState), "text/html");
    }

    try
    {
        var orderDocument = _orderService.AddOrderDocument(viewModel.OrderId, viewModel.File.FileName, viewModel.File.InputStream, viewModel.Title);

        // setting the return type to "text/html" is a hack that is needed to prevent some versions of IE from prompting the user to download the json
        // since it is returned into an iframe. Some versions of IE11 were behaving this way, though others were okay. Leaving in for safety's sake.
        // related post:
        return Json(new OrderDocumentViewModel
        {
            OrderDocumentId = orderDocument.Id,
            DocumentId = orderDocument.DocumentId,
            FileName = orderDocument.Document.Name,
            Title = orderDocument.Title
        }, "text/html");
    }
    catch (ValidationException validationEx)
    {
        return Json(new ModelStateException(ModelState), "text/html");
    }
    catch (Exception ex)
    {
        return Json(ex, "text/html");
    }
}

With the 200 Success response (even for server-side errors), the content of the JSON is pushed into the IFrame and can then be read from the IFrame like normal.  Then it’s up to your JavaScript to read the content of the IFrame and act accordingly, like the example below:

function submitOrderDocumentForm() {
    var callback = function () {
        var response = $('#documentIFrame').contents().text();
        var jsonResponse = tryParseJSON(response);

        if (jsonResponse) {
            if (!jsonResponse.Message) { // a Message property symbolizes an Exception from C#
                // success: process the returned view model here
            } else {
                // server-side error: surface jsonResponse.Message to the user
            }
        } else {
            // response wasn't valid JSON: treat as a failure
        }

        $('#documentIFrame').unbind('load', callback);
    };

    $documentForm.attr('target', 'documentIFrame');
    $('#documentIFrame').bind('load', callback);
    $documentForm.submit(); // kick off the POST; the response loads into the hidden IFrame
}

// tryParseJSON isn't shown in the original post; a minimal version could be:
function tryParseJSON(str) {
    try {
        var obj = JSON.parse(str);
        return obj && typeof obj === 'object' ? obj : false;
    } catch (e) {
        return false;
    }
}



Jul 15

Renaming an email attachment on K2 workflow

In my previous blog, I explained how to send email attachments from K2 workflow.

Sending SSRS reports as attachments in K2 workflow

However, you may have noticed that the attachments come with a weird naming format, for example:

<file><name>MyRequest062a34ce-a8d4-4742-8ba2-fde577a7c297.pdf</name><content>{base64 string content here}</content></file>

I think this is because of the way SSRS (or K2) tries to make the name unique. This is fine and dandy, but it doesn’t look great for the person receiving the email. How do we fix this?

Luckily, I figured out a way – or actually two ways – to do this:

Option 1: Editing code

On the Mail Event, right click and Select View Code -> EventItem.


This will open the xoml file as shown below:


Right click on “Add Attachment” and select View Code. Go to the ProcessAttachments method and edit the file name as shown below:

This option lets you define the name based on a Process Instance variable:
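
For example, the change is along these lines (a sketch only – the designer-generated code varies by K2 version, and the data field name and the attachment variable here are assumptions):

// Inside ProcessAttachments, before the attachment is added to the mail message,
// replace the generated name with one based on a process data field.
// "RequestNumber" and the local variable "attachment" are illustrative.
string requestNumber = K2.ProcessInstance.DataFields["RequestNumber"].Value.ToString();
attachment.Name = string.Format("MyRequest_{0}.pdf", requestNumber);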


Option 2: Using Inline Function “Create File from Content”

In this option, when you are on the Attachments wizard screen in the Mail Event wizard, instead of adding a reference to the SmartObject directly, add the Inline function “Create File From Content”.


This will take you to the File Content wizard:


Specify the file name of your choice for the first parameter.

For the Content parameter, navigate to the report, and when you have to select the Return Type, choose ReportFile.Content (this is very important; otherwise the user will get a corrupt attachment).

I have had mixed results with Option 2. You may have to play around with the Data Conversions inline functions if it does not work at the first attempt, but the K2 documentation says that the content can be a base64-encoded string, etc.



Jul 15

Accessing SSRS reports via Smart Objects

To communicate with SSRS reports from a K2 workflow, you need to have a Smart Object defined. This is similar to pretty much everything that you access via workflow (for example, communicating with a database). In this blog, I will show you the steps to connect to SSRS (I am assuming you already have SSRS installed on your servers and that K2 is installed with the right plug-ins to connect to SSRS).

First create a new Smart Object and click on “Add” in the SmartObject section:

In the “Add Service Object Method” wizard screen, click on Browse to open the Context Browser.

Go to Service Object Server(s) -> Service Object Server. Navigate to your Reporting Service node and pick the report type as shown below:


After binding the appropriate input and output parameters, note that the return type is “File” (more about this in a later blog).


Click on Finish, deploy the SmartObject and bingo! You are now ready to access your PDF SmartObject via K2 workflow.



Jul 15

Sending SSRS reports as attachments in K2 workflow

I recently ran into a requirement where I had to connect to SSRS, retrieve reports and send them as attachments via the K2 Mail event. I looked everywhere online but could not find any helpful resources. Hopefully this series of blogs will be useful to someone who runs into something similar.

The first blog discusses how to set up a SmartObject to access SSRS reports.

Accessing SSRS reports via Smart Objects

This blog discusses how to read the Smart Object and send the report as an attachment:

Create an Activity and add a Mail Event for the activity. On the Attachments wizard screen, click on Add and then on the ellipsis icon to browse to the SmartObject that is configured to read the SSRS report (clicking the Browse button will browse local folders instead).

This will open the Context Browser window. Go to the Environment tab and navigate to SmartObject server and select the report as highlighted below (click on the report name and not on any child nodes).


Specify any input parameter for the report. On the next screen, you will be asked to select the return type as shown below:


ReportFile.Content is the document content as a base64-encoded string.

ReportFile.FileName is the name of the file.

ReportFile.XML is an XML-wrapped object of the above, for example:

<file><name>MyRequest062a34ce-a8d4-4742-8ba2-fde577a7c297.pdf</name><content>{base64 string content here}</content></file>

Select the XML format option as the return type.

And you are good to go! You have now configured K2 mail event to send attachments obtained from SSRS reports.

Read my other blog if you want to find a way to change the attachment name.

Renaming File Attachment



Jun 15

Convert Text to UPPER CASE and save to database in List View Smartforms

My goal was to take a user’s input into an editable list view and convert it into upper case. Sounds simple enough, doesn’t it? I tried this approach, but it didn’t work.

Not sure if it was because I was using a later version of K2 blackpearl (4.6.9), or maybe I was missing something, but the article above did guide me in the right direction.

Here are the steps I took in order to accomplish this:

  • Suppose you have a grid with First Name column and you would like to save the value in UPPER CASE.


In the above example, the corresponding control for the First Name column in the Add/Edit Item row is called “First Name Text Box”.

  • Click on the column grid that you want to be displayed as UPPER CASE.
  • Go to Properties and go to Expression property.
  • Create a new expression as shown below:


Note that the input parameter for ToUpper() is the Add/Edit Row control (and not the control under the Display Row(s) section).

  • Once set, when the user types a value into First Name and the control loses focus, the value is converted to UPPER CASE.

Here is how it looks when the control is in focus:


And when the control loses focus:


Hope this helps!



Apr 15

Raspberry Pi 2 w/ camera module time lapse video of pepper and tomato sprouts


As I mentioned in my last post, I’m working on a project with the Raspberry Pi 2 and one of the things I’m doing is playing around with the camera module.

This little camera is not bad (similar to a cell phone camera), but it definitely does best at a bit of a distance – probably 6-10 feet at least. I need to be a bit closer to get enough detail, though, due to the limited spacing between the grow lights and the seed trays. I ended up picking up one of those cheap little sets of lenses you can get for cell phones. It’s not going to win fine photography awards, but it’s just fine for my needs. The kit includes a fisheye, wide angle, macro and telephoto lens. Here’s a closeup of one of the lenses in place.

For my initial run of photos I’m using the wide angle, but I’m hoping to experiment with the macro lens on a single plant as well. This macro lens has to be REALLY close to get clear shots, so I’ll have to experiment a bit.

My camera mount is a very primitive holder I threw together out of scrap wood, but it does the job. I’m still tweaking it as I go. It’s about as DIY as it gets.

Here is a wider shot of the seed starting area with the camera mount in place. I have it taking photos every 30 minutes of one of my pepper and tomato seed starting trays.

Here is an initial time lapse video showing some of the seeds sprouting and growing. This was taken over the course of 5 days, March 26-30. I do change the camera position and seed tray position slightly, so it’s a bit jerky in spots. I’m also still figuring out the best settings to use when turning the photos into a video and the best shot frequency, so I may have some better examples later.



Mar 15

Initial thoughts on Raspberry Pi 2

When the Raspberry Pi first came out a few years back, it seemed like a very interesting idea in theory. A tiny computer for $35, completely self-contained, with built-in Ethernet, HDMI and a couple of USB ports. It piqued my interest briefly, but I never got around to trying it out.

Fast forward to 2015 and there’s a new model with a quad-core processor and more memory, which translates into better/faster video options and a lot more power in general. There are plenty of articles discussing all the ins and outs of the new model, but a couple of things made me take a look this time.

One, Microsoft has promised a version of Windows 10 (out in preview right now) that will run on the unit. This opens up all kinds of possibilities for someone who is already intimately familiar with the Windows development eco-system. I do love working with Linux, but the first part of this sentence is a lie. Guess I just lost any geek cred I was building up. I’ve dabbled in Linux on and off over the years and I think the biggest issue is that I’ve never spent enough time in it to get comfortable. So everything I want to do involves a trip to Google.

Two, my company Clarity is sponsoring a concept called Ship Days this year where each employee is expected to “ship” some little side project during the year. It’s pretty wide open, but could be a mobile app, an Internet of Things project or something you might see at a MakerFaire event. Suffice it to say I won’t be the only one taking a fresh look at the Raspberry Pi platform.

I’ve had the Raspberry Pi 2 for a couple weeks now and here are some random thoughts and impressions.

  • Since its conception, the Raspberry Pi fairly quickly became a hacker/tinkerer’s dream platform. That means there are all kinds of add-ons available, the setup process has gotten drop-dead simple and there are tons of tutorials, blog posts and ideas out there to peruse.
  • The Raspberry Pi 2 model mostly changed in how much power is on the board, so pretty much anything that worked with previous models will work with this one. In some cases you might need an adapter cable to hook up the proto boards or shields, but most stuff is fine.
  • The “NOOBS” setup experience gives you lots of options, including ones geared to specific uses, such as a media center PC. I was up and running in no time on the most common distro (Raspbian), which is a version of Debian Linux.
  • The unit doesn’t really like hot-swapping USB very much. I managed to corrupt my first install pretty easily and had to start again. If I understand correctly, part of this is due to using the SD card as your main boot disk, which is much more sensitive to I/O disruption than a traditional hard disk.
  • There are tools that make it easy to pop your SD card into your main computer and make a clone of it when everything is working the way you want, so that is certainly a good idea when working with this unit.
  • The networking stack seems a bit flaky with wireless. I got the highly recommended Edimax nano USB adapter, but I’m still having trouble getting the unit to respond consistently to SSH or RDP requests. I put in a job to restart networking every hour or so and that seems to have helped.
  • I got the Raspberry Pi camera module and it is extremely easy to work with. Right now I have it taking time-lapse photos of one of my seed starting trays. This tutorial worked great and it’s really simple to get working. More details on this in later posts.

All in all it’s an impressive little piece of engineering, particularly for $35. There are lots of possibilities for automation and monitoring that might be interesting to try on my little hobby farm. Many folks are already using a Pi or Arduino along with sensors to automate plant watering for instance. I bought a couple of moisture sensors that I’m hoping to get hooked up eventually, but as that requires some soldering it involves a bit more time to get up and running. I’m hoping to tackle that next.
