
29 Dec 14

Configuring Working Hours in K2 Blackpearl



Recently I ran into a task where I had to configure reminder notifications so that they are sent only during business hours. I was used to configuring event escalations with the default settings, so I was a bit stumped on how to do this. Luckily, it isn't that complicated, and K2 provides everything needed for this configuration. Let me explain…

Step 1

In order to configure your own working hours, you have to create your own time zone. To do this, you need access to the K2 Workspace. Once there, navigate to Management Console -> Workflow Server -> Working Hours Configuration. If you haven't set anything up yet, it will tell you nothing has been configured. Right-click this node and select “Add New Zone”. This opens a new window where you can specify your time zone and define what your working hours should be. In addition, you can include any exceptions (such as holidays) or special days (such as overtime). All this is explained in more detail at this link:

http://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.5/webframe.html#reference-ws_mcon-workflow-workinghours.html

The only thing to be careful with is the “Is Default” check box. When checked, it will impact all instances that are configured to use the Default Server Zone. If you do not want to apply the same setting to all processes, leave it unchecked.

Step 2

The server part is done. Now let's configure the event escalation.

Open your Event Escalation. By default, the “Use default working hours during execution” will be checked and the “Use Server Default Zone” radio button will be selected.

Uncheck “Use default working hours during execution”. This displays a Zone field where you can drag and drop your custom zone. Go to Object Browser -> Environment -> Workflow Management Server(s) -> Workflow Management Server -> Zones.

Under this node, you should see the zone that you configured in Step 1. Drag and drop it and, behold, you have just finished configuring working hours for your event escalation. For more details on the various options available when specifying a zone, check out this link:

https://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.6/webframe.html#daw06.html

Enjoy!



22 Dec 14

K2 Workflow: Custom functions



Recently I ran into a scenario where I had to repeat the same expression in various activities, and it would have been really time-consuming had I not discovered custom functions. To give an example, suppose I had to extract the first name from the Participant Name, which is displayed as “FirstName, LastName”. I would have to write an expression similar to:

First item(Split(Replace(input, <spaces>, <empty string>), ‘,’))
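For readers who think in code, the logic is roughly equivalent to the following JavaScript (for illustration only; in K2 the expression is assembled in the wizard, not written as code):

function firstName(participantName) {
    // "FirstName, LastName" -> remove the spaces, split on the comma, take the first item
    return participantName.replace(/\s/g, '').split(',')[0];
}

firstName('John, Smith');   // "John"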

Basically, I first replace any spaces with an empty string, then split on the comma delimiter, and then extract the first item. Imagine doing this in each activity over and over. Luckily, there is a way to save your own custom function, and it is very simple. When you click on the option to create an expression, click on the little icon as shown at the link below:

https://www.k2.com/onlinehelp/k2blackpearl/userguide/current/webframe.html#save_inline_function-save_function_configuration.html

Check the box to save the custom function. This saves the function under the “Saved Functions” section.

Now, when you have to reference this function in any other activity, all you have to do is drag and drop your custom function; you don't need to rebuild the expression again and again. Isn't that cool?

But wait a minute, how do I specify what the function's input values should be? It is a pain to drill down into the deeply nested function again just to specify the input values, and this is where it can get tricky. For my requirement, the input was the Participant Name, and since this is available on every activity, I didn't need to substitute the function's input values with anything; I just used what it was already taking.

Another option is to pre-define the input variables. For example, suppose you have to compute:

Sum(Square(x), Cube(x)) in multiple places. Define a data field x and set its value to whatever you need before you invoke your custom function.
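As a rough JavaScript sketch of the same idea (purely illustrative; x stands in for the K2 data field):

var square = function (n) { return n * n; };
var cube = function (n) { return n * n * n; };

// The saved function: Sum(Square(x), Cube(x))
var sumSquareCube = function (x) { return square(x) + cube(x); };

var x = 3;                     // set the data field before invoking the saved function
console.log(sumSquareCube(x)); // 36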

Making changes to your custom function

OK, so you were able to create your custom function. Now what if you have to change it? Pretty simple: double-click any place where you referenced it, make the necessary changes and then, as shown in the link, check “Save function configuration”. If you keep the same name, it will ask whether you want to replace the existing function. Click Yes. The change will now apply across all places where this function is referenced.

Deploying changes to another environment

When you deploy your changes to another environment, you will notice that your function does not show up in the Saved Functions section. This does not mean that your K2 process will fail; the logic is referenced inline within each activity. If you want your custom function to show up as a saved function in the new environment, open any activity where you used it, double-click the function and check “Save function configuration”. The custom function will now show up in the Saved Functions section.

Hope you find this as helpful as I did; it sure saved a lot of time building the same expression over and over.



22 Dec 14

K2 Workflow: Creating sub-processes (aka IPC event)



Have you ever run into a scenario where you found yourself executing the same set of activities over and over and wished you could modularize them into a separate function, the way most programming languages let you? Well, there is hope in the K2 world, and I will try to explain the steps. To illustrate with an example (this is pseudo-code only):

function SubProcess(a, b, c)
{
    DoSomething(a, b);
    DoSomethingAgain(a, b, c);
    return x, y and z;
}

The function SubProcess takes the input parameters a, b and c, executes two functions, DoSomething and DoSomethingAgain, and then returns x, y and z.

How does one achieve this in K2? Using an IPC event.

As in any language, the first thing we need to do is create the function. In K2, this is very similar to creating another K2 process, so I won't dwell on it too much since I am assuming you already know how to create a process. Once the sub-process is created, deploy it to the server so that the main process can reference it.

Go to your main process and drag an IPC event from the toolbox; you will be given the option to specify the process name. Click Browse to display a list of available processes and select the sub-process from this list. Then specify what value to use as the folio for this sub-process.

[Screenshot: ipc1]

You then have the option of executing this sub-process synchronously or asynchronously; choose based on your requirements. Click Next and you will come to the “Process Send Field Mappings” wizard screen. This screen lets you map the values that you pass into the sub-process, so in our example it is the way to specify values for the input parameters a, b and c. One nice touch: if you name your input parameters the same as the field names you use in the main process, you can click Auto-map and the mapping will be done for you automatically (otherwise, you just drag and drop the mappings):

[Screenshot: ipc2]

Click Next to go to the “Process Return Field Mappings” screen. This is how we map values from the sub-process back to the main process, or in our example, the way to return x, y and z to the main process.

[Screenshot: ipc3]
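Putting the two mapping screens together, the IPC event is conceptually doing something like this (illustration only; mainA, mainB, mainX and friends are placeholder names for the main process's data fields, and the returned values are shown as a single result object):

// Process Send Field Mappings: pass a, b and c into the sub-process
var result = SubProcess(mainA, mainB, mainC);

// Process Return Field Mappings: copy x, y and z back into the main process
mainX = result.x;
mainY = result.y;
mainZ = result.z;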

Click Finish and you are done; you can begin using this sub-process as many times as you like!

All this is fine and dandy, but what are the other benefits of using a sub-process, besides code reuse? To mention a couple:

  • It keeps the main process from becoming too long and makes it more manageable.
  • Even if the code is not being reused, it is helpful to break the workflow into separate modules, which makes it easier to investigate any issues.
  • And the best part: if you have to fix anything in the sub-process, you only need to deploy the sub-process and not the main process! So if a workflow was broken in the sub-process, you fix only that, and the workflow activity continues from there.

For more info, check out this link:

http://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.5/webframe.html#ipc_concepts.html

Hope you found this helpful!



20 Nov 14

K2 Workflow: Apply HTML formatting on email events



For a novice, it would seem that applying HTML formatting should be straightforward with the HTML option provided in the Email event (or Client event) wizard, for example:

[Screenshot: emailFormat1]

However, when I saw the output, I was stumped. The output was something like this:

——————————————-

Dear XYZ,

Please assign a name for task 123. Use the following link to open the worklist item:

Click to open worklist item

———————————-

A couple of things in case you haven’t noticed:

1. The Participant name was not in bold.

2. The task text was italicized but not the variable itself. (By now you will have guessed that formatting is not being applied to variables.)

3. There is no option to give an alternate name to the Worklist item link.

So how can we apply formatting to variables and, generally speaking, apply HTML formatting however we want? Well, there is a way. If you look closely at the Format ribbon, there is an option called Load HTML Template:

[Screenshot: htmlTemplate]

This lets you supply the actual template as you would have written it in an HTML page. With this option you have to specify everything (font style, size, etc.), otherwise defaults are assumed; this is unlike the previous HTML format option, where you could apply formatting changes from the Format ribbon.

With this option you can now do whatever you want: apply formatting to variables, give alternate names to links, and so on. You can also preview your changes by clicking the “Preview this message in a new window” link.
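As a simple illustration (the names in curly braces are placeholders for whatever K2 fields you drag into the template, not actual field names), a template along these lines bolds the participant name and gives the worklist link an alternate name:

<html>
  <body style="font-family: Arial; font-size: 10pt;">
    <p>Dear <b>{Participant Name}</b>,</p>
    <p>Please assign a name for task <i>{Task Number}</i>.</p>
    <p><a href="{Worklist Item URL}">Click to open worklist item</a></p>
  </body>
</html>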

You can find more information on the K2 site at: http://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.5/webframe.html#reference__-_mail_event_message_body.html

Happy formatting!



7 Nov 14

Sharing Views in MVC – A Quick Start with RazorGenerator.Mvc



Overview

Recently, on a few different projects, the opportunity to share views between web sites has come up. I looked around and found a few different options, but the one that stood out most to me was the NuGet package RazorGenerator.Mvc. This package allows you to compile views into a separate .dll and use them across multiple MVC web projects. However, I did not run across an easy-to-follow, complete start-up guide. In this write-up, I'll lay out the steps to set up RazorGenerator.Mvc as well as some of the pros and cons of this approach.

All screenshots are from Visual Studio 2013.

Quick Start

Step 1: Create an MVC Web Project
Open Visual Studio and create a new project:

[Screenshot: Shared Views]

Select MVC, change the authentication to whatever you want (I’m going to do No Auth here) and click OK:

[Screenshot: Shared Views]

Step 2: Add a Second Web Project for Shared Views
The RazorGenerator.Mvc documentation suggests adding a class library for this next step. I suggest adding a second MVC project instead: it gives you the solution structure you need, view type-ahead, and, out of the box, the additional setup you would otherwise have to do yourself. Remove all of the folders except Views, and delete all the excess views and the Global.asax file until your solution looks similar to this:

[Screenshot: Shared Views]

Step 3: Add a Reference to the Shared Views Project
Right-click References in your MVC project, select Add Reference, and add a reference to your Shared Views project:

[Screenshot: Shared Views]

Step 4: Add a Reference to RazorGenerator.Mvc
Right-click the Shared Views project and select ‘Manage NuGet Packages…’. Search online for ‘RazorGenerator.Mvc’ and click Install:

[Screenshot: Shared Views]

When you do this, you’ll see that a new class – RazorGeneratorMvcStart – was added to the App_Start folder.

Step 5: Create a Shared View
Remove the ‘Index.cshtml’ file from the ‘Views > Home’ folder of your MVC project. Add a new ‘Index.cshtml’ view to the ‘Views > Home’ folder of your Shared Views project. Open up the properties window for this new view and in the ‘Custom Tool’ section enter ‘RazorGenerator’:

[Screenshot: Shared Views]

When you do this, you’ll see that a new class is generated for the Index view:

[Screenshot: Shared Views]

If you open this new class, you'll see that RazorGenerator creates a file much like a text template would; the entire view is represented in this class. Also, if you look at the PageVirtualPathAttribute, you'll see the mapping that allows your consuming MVC projects to find the view.

Step 6: Running the Application
Once you’ve completed the above steps, you can run the MVC project. Since there is no home index view in the MVC project, you should successfully see the index page from the Shared Views project.

Overriding a Shared View
What if one of your MVC projects has an Index view that you want to take precedence over the shared view? By default, RazorGenerator.Mvc shared views always override views with the same path/name in consuming MVC projects. To prevent this behavior, you need to change the way the RazorGenerator engine is added to the view engines: in the RazorGeneratorMvcStart.cs file that was added to the Shared Views project, add the RazorGenerator.Mvc engine to the end of the list instead of the start:

[Screenshot: Shared Views]

This will allow you to override shared views or partial views in consuming MVC projects.

Potential Issue
If you do decide to add a class library instead of a second MVC project for your shared views, and after compiling you receive the error message “Could not precompile the file ‘ViewName.cshtml’. Ensure that a generator declaration exists in the cshtml file.”, you need to add a web.config with the proper MVC assembly bindings. This can be done by copying the web.config from your main MVC project.

Another way to tell that this is the issue is if your ViewName.cshtml file includes the text “Could not precompile the file… Ensure that a generator declaration exists in the cshtml file.”:

[Screenshot: Shared Views]

Final Thoughts

A good use case for this approach would be two or more websites that share a lot of common views, such as a single company with multiple brands or internal/external versions of a website. This shared library would allow them to share Views/Partial Views/ViewModels across all brands. However, each site would still maintain its own assets (images, etc.) and master layout pages, giving each site the ability to create a unique look and feel. While it is possible, I'm a little hesitant about including Controllers in this shared library, but there may be a good use case for it. JavaScript files, images, and other non-compiled assets will have to be kept in the MVC projects instead of the shared library. This means that if a shared view depends on a specific JavaScript file, each site that uses the view will have to include a copy of that file. However, a post-build event or something similar can be used to mitigate this concern.

I’ve included the source code for the sample solution below.

Source Code



24 Oct 14

K2 Blackpearl: Using Dynamic Escalation Date



Recently we had a scenario where, after the client was notified about an activity, we had to send the first reminder notification 1 hour after the initial email went out and further reminder notifications every 2 hours.

I defined the data variables FirstNotification as 1 and ReminderNotification as 2.

So we used the Dynamic Escalation setting available in the Event Escalation wizard and set the Dynamic Date to Add Hours(Now, FirstNotification).

We then used the “and then after” option to set the reminder interval to the ReminderNotification value.

[Screenshot: Escalation Window]

Seems pretty straightforward, right? I thought so too, until I noticed that the first notification was not being sent until 3 hours after the original email. I dug through the entire wizard and rechecked my variables to make sure I wasn't doing any additional computation, and that is when I figured out what was going on:

[Screenshot: Escalation Notes]

Here is the URL if you want more details:

http://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.4/webframe.html#EscW06.html

I am not sure why K2 decided to implement it this way, but to get it working as I wanted, I set the Dynamic Date to:

Add Hours(Now, FirstNotification - ReminderNotification)

Suppose Now is 01/01/2014 8:00 AM.

At first glance you may think this computes to Add Hours(Now, 1 - 2), which is 01/01/2014 7:00 AM. But remember, the first notification is only sent after the reminder extension has also elapsed, so adding 2 hours to this gives 01/01/2014 9:00 AM, which is what we wanted to begin with.
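A small JavaScript sketch of the arithmetic (purely illustrative; the hour values match the data fields above):

function addHours(date, hours) {
    return new Date(date.getTime() + hours * 60 * 60 * 1000);
}

var now = new Date('2014-01-01T08:00:00');
var firstNotification = 1;     // hours until the first reminder
var reminderNotification = 2;  // hours between reminders

// Naive setup: dynamic date = Now + FirstNotification = 9:00 AM, but the first
// escalation only fires after the reminder interval has also elapsed, i.e. 11:00 AM.
var naiveFirstEmail = addHours(addHours(now, firstNotification), reminderNotification);

// Workaround: dynamic date = Now + (FirstNotification - ReminderNotification) = 7:00 AM,
// so the first escalation fires at 7:00 AM + 2 hrs = 9:00 AM, as intended.
var dynamicDate = addHours(now, firstNotification - reminderNotification);
var actualFirstEmail = addHours(dynamicDate, reminderNotification);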

On your mark, escalate, GO!!!



24 Oct 14

K2 Blackpearl: Adding URL links to your email message body



Have you run into a scenario where you have to include a URL in an email, and when you use an <a href> tag in the email body the generated link contains weird characters (<span> tags, etc.) that essentially make the URL invalid? Here are a couple of ways to get it to work.

Assume the URL to use is http://www.google.com?MyId={Variable}, where {Variable} is a K2-defined value, data field, etc.

Saving the URL in a data variable and passing it into the email body

Create a data event and set the source as: <a href="http://www.google.com?MyId={Variable}">Google</a>

If you do not include the <a href> tags, it will NOT work!

Using the HyperLink inline function

In the email body, you can use the HyperLink inline function to create the URL, which will have the <a href> tags built in automatically.

[Screenshot: Hyperlink function]

The one thing you must be careful with is building the URL, since it is in expression format. You have to “build” the string, treating any field variables as inputs and appending them to the expression. For example, the expression for the above URL would be:

“http://www.google.com?MyId=” & {Variable}

Attempts that won’t work

Before arriving at the above solutions, I tried various permutations; below is a list of things that will not work. Hopefully, you will not make the same mistakes:

  1. Adding <a href="http://www.google.com?MyId={Variable}">Google</a> directly into the message body:
    [Screenshot: MessageBody]
  2. Creating a data event where the source is missing the <a href> tag.
  3. Using URL Decode: this returns the URL, but if you reference it directly in the message body, you have the same issue.

Happy linking!



15 Oct 14

Three JS Introduction For Non Developers



Three.js Jumpstart

Three.js is a cross-browser JavaScript library for creating 3-D graphics in a web browser. Three.js scripts are used with the HTML5 canvas element, SVG, and WebGL to render graphics. Big data and fully immersive interactive experiences are becoming ever more popular, and Three.js is a great way to develop web solutions that are easily accessible to large numbers of users. The source I will be referencing can be found and fiddled with at: http://codepen.io/MichaelMazur/pen/xGKyc .

This CodePen contains a nice triangular shape that rotates on a styled canvas. It should give you a basic understanding of how to create a scene and manipulate the visuals. From this brief post, you will learn the basics of setting up a Three.js scene, some great resources, and the tips and tricks I wish I had known before I started.

Getting the Ball Rolling

To get started, go to http://threejs.org/ and download the Three.js starter zip file, which contains the source and some great example code. The basic elements you need to render a blank scene to the canvas are a scene, a camera, and a renderer. Once you have set up those three elements, you must add the renderer to the Document Object Model (DOM) and render the scene.


// Set up a scene
var scene = new THREE.Scene();

// Set up a camera
var camera = new THREE.PerspectiveCamera(80, window.innerWidth / window.innerHeight, 0.1, 1000);

// Specify the zoom level (optional)
camera.position.z = 5;

// Create the renderer and set its size
var renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);

// Add the renderer to the DOM
document.body.appendChild(renderer.domElement);

// Render loop: request the next frame and render the scene each time
var render = function () {
    requestAnimationFrame(render);
    renderer.render(scene, camera);
};
render();


Now let us add a shape to the scene. You must define a geometry and a material in order to render a shape in a scene. Here is a basic example of how to add a triangular wireframe shape.

// A sphere with a radius of 3 and only 3 x 3 segments, which gives the low-poly, triangular look
var geometry = new THREE.SphereGeometry(3, 3, 3);
// A dark grey wireframe material
var material = new THREE.MeshBasicMaterial({ color: 0x3f3f3f, wireframe: true });
// Combine the geometry and material into a mesh and add it to the scene
var shape = new THREE.Mesh(geometry, material);
scene.add(shape);

[Screenshot: screenShot]
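The shape in the CodePen rotates. A minimal way to get the same effect, assuming the render loop above, is to update the mesh's rotation on every frame by adding these two lines inside the render function, just before renderer.render(scene, camera):

shape.rotation.x += 0.01;
shape.rotation.y += 0.01;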

That Is It, Go Nuts!

As you can quickly see, Three.js is a very powerful, well-documented library that is easy to learn. The Three.js library has been at the forefront of creating 3-D graphics in web browsers and is the best current solution. Setting up a scene, adding a camera and shapes, and rendering is all you need to get a project rolling; it is that easy. Now that you have the basics of Three.js, go nuts and try making your own awesome visualizations and interfaces!

Hindsight

In closing, as I mentioned earlier, here are the things I wish I had known; a few things were not very clear to me while getting started:

  1. The orientation plane, some positioning/z-index issues:
    By default, the shapes are rendered to the center of the scene, which is laid out on a Cartesian plane. I found that there is an issue with absolute positioning of elements when you layer images, or text, on top of a scene. The z-index property is supposed to apply to elements that are position:absolute, position:fixed, or position:relative. When I had elements that were position:absolute, their z-index properties were being ignored and made layering difficult. Changing them to position:fixed corrected the issue; so if you see anything out of the ordinary with z-index, play with fixed and absolute positioning.
  2. What to do with a scene after you leave the page or view:
    If you are using multiple pages or views on a site that do not include a rendered three.JS scene, you want to make sure that you are removing the scene from the DOM upon leaving the page or view. I was using Backbone to manage my particular site, so I just removed the whole view from the DOM.
  3. Resizing a scene on a browser window resize:
    With the code I have provided here, there is no resizing. There are quite a few references on the web for resizing a scene when the browser is resized. THREEx does a great job of handling the re-rendering of scenes on browser window resizes; look at the code, it is easy to digest and should help you understand how to handle window resizes (a minimal hand-rolled version is sketched just after this list): http://learningthreejs.com/data/THREEx/docs/THREEx.WindowResize.html
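For reference, a minimal hand-rolled resize handler (assuming the scene, camera, and renderer variables from the earlier snippet) looks something like this:

window.addEventListener('resize', function () {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight);
});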



14 Oct 14

Hardware Prototyping with Arduino: Sensors



[Image: Infrared Proximity Breakout - VCNL4000]

I was recently tasked with determining an effective method of detecting the removal of a small product from an enclosed space. This was an excellent opportunity to break out the breadboard and multi-meter to construct three quick sensor prototypes and explore their potential benefits and drawbacks. Here is a summary of the sensors that I explored and my impressions of them.

 

Continue reading “Hardware Prototyping with Arduino: Sensors” »



24 Sep 14

The Apple Watch is Not Bringing the Wearable Revolution (but Apple Will Still Sell Millions of Them)



A short while ago, the tech world waited with bated breath for Apple's unveiling of this year's iPhone. Along with that announcement, Apple also gifted the world with Apple Pay and the Apple Watch (among other things).

Cue the breathless proclamations that Apple has turned the wearable devices category on its head. Hyperbole aside, the Apple Watch hasn’t even been released yet, nor is it available for pre-order, so making such pronouncements is premature at best.

Apple has met with tremendous success with the release of the iPod, iPhone, and iPad; that is not up for debate. What is up for debate is the Apple Watch's chances of joining its larger, older brethren. It is my opinion that the Apple Watch, like other wearables before it, will not put a watch on everyone's wrist the way the iPhone put a smartphone in everyone's pocket. (I couldn't resist that hyperbole.)

Cost

When it finally ships, the Apple Watch will retail for $349. However, there's no indication of what that dollar amount will include. Will it include just the (smaller) watch and a basic band? Or will it include either size of watch and a band of your choice? Or just the watch with no band whatsoever? Keep in mind that there are two different watch sizes, three different versions of the watch, and a large assortment of bands on offer.

Thus, while the Apple Watch no doubt has the same build quality as previous Apple products, that also makes it a premium product. I imagine it will bring the thought of owning a watch into the minds of many people who normally don't wear one, but they will have to choke down the $349 (minimum) price tag to do it.

Convenience

Couple that price with the fact that watches in that price range are designed to last a considerable amount of time. Personally, I have owned my titanium Citizen watch for 4 years and it still works flawlessly, has nary a scratch on it, and I paid $350 for it. Plus it never needs a battery.

In contrast, a smart watch, like a smartphone, becomes obsolete because its hardware cannot support the increasing demands of its software (an unfortunate corollary to Moore's law). Most people replace their phones after 16 months; does Apple expect them to do the same with their watch? Keep in mind that the Apple Watch won't be subsidized like the iPhone.

Capability

The biggest hurdle that the Apple Watch can't get over (like all smart watches) is that it's designed to work with a companion smartphone. The phone does the heavy lifting as far as processing and sensor calculations go. This lets the watch be lighter and cheaper, but it also means the watch requires a smartphone to function! If you don't own a compatible iPhone, you can't use an Apple Watch.

This is the area where the Apple Watch differs most from its sister Apple i-devices: you can use an iPhone without an iPod or an iPad. This fact alone limits the Watch's sales potential, because it is only a companion device. This isn't just an issue with the Apple Watch, although it's the most high-profile device to have it; it's an issue with all wearables.

As I wrote earlier, the masses would be better served by companion devices (watches, glasses, in-car infotainment systems) that are device agnostic. While that would hurt the short-term profits of companies entering this space with companion products, it would also bring the kind of revolution that the iPhone, iPad, and iPod did.
