
24 Oct 14

K2 Blackpearl: Using Dynamic Escalation Date



Recently we had a scenario where, after the client was notified about an activity, we had to send the first notification 1 hour after the initial email went out, and reminder notifications every 2 hours after that.

I defined the data variables FirstNotification as 1 and ReminderNotification as 2.

So we used the Dynamic Escalation setting available in the Event Escalation wizard and set the Dynamic Date to Add Hours(Now, FirstNotification).

We then used the "and then after" option to send the reminder notifications at the ReminderNotification interval.

[Screenshot: Escalation Window]

Seems pretty straightforward, right? I thought so too, until I noticed the first notification was not being sent until 3 hours after the original email. I dug through the entire wizard and rechecked my variables to make sure I wasn't doing any additional computation. That is when I figured out what was going on:

[Screenshot: Escalation Notes]

Here is the documentation URL if you want more details:

http://help.k2.com/onlinehelp/k2blackpearl/userguide/4.6.4/webframe.html#EscW06.html

I am not sure why K2 decided to implement it in this fashion, but to get it working as I wanted, I set the Dynamic Date to

Add Hours(Now, FirstNotification - ReminderNotification)

Suppose Now is 01/01/2014 8:00 AM.

At first glance you may think, hey, this computes to Add Hours(Now, 1 - 2), which is 01/01/2014 7:00 AM. But remember, K2 sends the first notification only after the extension interval completes, so adding the 2 hours brings it to 01/01/2014 9:00 AM, which is what we wanted to begin with.
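
To make the timing concrete, here is the same arithmetic as a small sketch in plain JavaScript (not K2 code; the variable names simply mirror the data fields defined earlier):

// Plain JavaScript sketch of the escalation timing described above.
function addHours(date, hours) {
  return new Date(date.getTime() + hours * 60 * 60 * 1000);
}

var now = new Date('2014-01-01T08:00:00'); // suppose Now is 8:00 AM
var firstNotification = 1;    // hours until the first notification
var reminderNotification = 2; // hours between reminder notifications

// Naive setup: Dynamic Date = Add Hours(Now, FirstNotification).
// K2 waits one full reminder interval past the dynamic date, so the
// first notification fires at 8:00 + 1 + 2 = 11:00 AM, 3 hours late.
var naiveFirst = addHours(addHours(now, firstNotification), reminderNotification);

// Corrected setup: Add Hours(Now, FirstNotification - ReminderNotification).
// 8:00 AM + (1 - 2) = 7:00 AM; plus the 2-hour interval = 9:00 AM, as intended.
var correctedFirst = addHours(addHours(now, firstNotification - reminderNotification), reminderNotification);

console.log(naiveFirst.toLocaleString(), correctedFirst.toLocaleString());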

On your mark, escalate, GO!!!



24 Oct 14

K2 Blackpearl: Adding URL links to your email message body



Have you run into a scenario where you have to include a URL in an email, but when you use an <a href> tag in the email body, the generated link contains weird characters (<span> tags, etc.) that essentially make the URL invalid? Here are a couple of ways to get it to work.

Assume the URL to use is http://www.google.com?MyId={Variable}, where {Variable} is a K2-defined value, data field, etc.

Saving the URL in a data variable and passing it into the email body

Create a data event and set the source to: <a href="http://www.google.com?MyId={Variable}">Google</a>

If you do not include the <a href> tags, it will NOT work!

Using the HyperLink inline function

In the email body, you can use the HyperLink function to create the URL, which will have the <a href> tags automatically built in.

[Screenshot: Hyperlink function]

The one thing you must be cautious of is building the URL, since it is in expression format. You have to "build" the string, treating any field variables as inputs and appending them to the expression. For example, the expression for the above URL would be:

"http://www.google.com?MyId=" & {Variable}
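
Conceptually, the expression simply concatenates a quoted literal with the runtime value of the field. The same idea in plain JavaScript (for illustration only; this is not K2's expression syntax):

// Plain JavaScript analogy of the K2 expression above.
var variable = '12345'; // stands in for the {Variable} data field
var url = 'http://www.google.com?MyId=' + variable;
// The HyperLink inline function then wraps the result in an anchor tag:
var link = '<a href="' + url + '">Google</a>';
console.log(link); // <a href="http://www.google.com?MyId=12345">Google</a>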

Attempts that won’t work

Prior to arriving at the above solutions, I tried various permutations. Below is a list of things that will not work; hopefully, you will not make the same mistakes:

  1. Adding the <a href="http://www.google.com?MyId={Variable}">Google</a> tag directly into the message body:
    [Screenshot: MessageBody]
  2. Creating a data event whose source is missing the <a href> tag
  3. Using URL decode: this will return the URL, but if you reference it directly in the message body, you have the same issue.

Happy linking!



15 Oct 14

Three JS Introduction For Non Developers



Three.js Jumpstart

Three.js is a cross-browser JavaScript library for creating 3-D graphics in a web browser. Three.js scripts are used with the HTML5 canvas element, SVG, and WebGL to render graphics. Big data and fully immersive interactive experiences are becoming ever more popular, and Three.js is a great way to develop web solutions that are easily accessible to large numbers of users. The source I will be referencing can be found and fiddled with at http://codepen.io/MichaelMazur/pen/xGKyc.

This CodePen contains a nice triangular shape that rotates on a styled canvas, which should give you a basic understanding of how to create a scene and manipulate the visuals. From this brief post, you will learn the basics of setting up a Three.js scene, some great resources, and the tips and tricks I wish I had known before I started.

Getting the Ball Rolling

To get started, go to http://threejs.org/ and download the Three.js starter zip file, which contains the source and some great example code. The basic elements you need to render a blank scene to the canvas are a scene, a camera, and a renderer. Once you have set those three elements up, you must add the renderer to the Document Object Model (DOM) and render the scene.


// Set up a scene
var scene = new THREE.Scene();
// Set up a camera
var camera = new THREE.PerspectiveCamera(80, window.innerWidth / window.innerHeight, 0.1, 1000);
// Specify the zoom level (optional)
camera.position.z = 5;
// Create the renderer and set its size
var renderer = new THREE.WebGLRenderer({alpha: true});
renderer.setSize(window.innerWidth, window.innerHeight);
// Add the renderer's canvas element to the DOM
document.body.appendChild(renderer.domElement);
// Render loop: schedule the next frame, then draw the scene
var render = function () {
  requestAnimationFrame(render);
  renderer.render(scene, camera);
};
render();


Now let us add a shape to the scene. You must define a geometry and a material in order to render a shape in a scene. Here is a basic example of how to add a triangular wireframe shape.

// A sphere geometry with only 3 width and 3 height segments renders
// as a rough, triangular-looking wireframe
var geometry = new THREE.SphereGeometry(3, 3, 3);
// A basic material needs no lights; wireframe draws only the edges
var material = new THREE.MeshBasicMaterial({color: 0x3f3f3f, wireframe: true});
var shape = new THREE.Mesh(geometry, material);
// Add the mesh to the scene so the render loop draws it
scene.add(shape);
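
The rotation in the CodePen comes from incrementing the mesh's rotation inside the render loop. A minimal version, replacing the render function from the first snippet (the exact increments in the CodePen may differ):

// Rotate the shape a little on every frame, then draw the scene.
var render = function () {
  requestAnimationFrame(render);
  shape.rotation.x += 0.01;
  shape.rotation.y += 0.01;
  renderer.render(scene, camera);
};
render();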

[Screenshot: the rendered wireframe shape]

That Is It, Go Nuts!

As you can quickly see, Three.js is a very powerful, well-documented library that is easy to learn. It has been at the forefront of creating 3-D graphics in web browsers and is the best current solution. Setting up a scene, adding a camera and shapes, and rendering are all you need to get a project rolling; it is that easy. Now that you have the basics of Three.js, go nuts and try making your own awesome visualizations and interfaces!

Hindsight

In closing, I mentioned tips and tricks I wish I had known. A few things were not very clear to me while getting started:

  1. The orientation plane and some positioning/z-index issues:
    By default, shapes are rendered at the center of the scene, which is laid out on a Cartesian plane. I found that there is an issue with absolute positioning of elements when you layer images or text on top of a scene. The z-index property is supposed to apply to elements that are position:absolute, position:fixed, or position:relative. When I had elements that were position:absolute, their z-index properties were being ignored, which made layering difficult. Changing them to position:fixed corrected the issue; so if you see anything out of the ordinary with z-index, play with fixed and absolute positioning.
  2. What to do with a scene after you leave the page or view:
    If you are using multiple pages or views on a site that do not include a rendered Three.js scene, make sure you remove the scene from the DOM upon leaving the page or view. I was using Backbone to manage my particular site, so I just removed the whole view from the DOM.
  3. Resizing a scene on a browser window resize:
    The code I have provided here does no resizing. There are quite a few references on the web for resizing a scene with the browser, and a minimal handler is sketched just below this list. THREEx does a great job of handling the re-rendering of scenes on browser window resizes, and its code is easy to digest: http://learningthreejs.com/data/THREEx/docs/THREEx.WindowResize.html
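
For reference, here is a minimal resize handler in plain Three.js (no THREEx dependency), assuming the camera and renderer variables from the snippets above:

// Keep the camera's aspect ratio and the renderer's size in sync
// with the browser window.
window.addEventListener('resize', function () {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
});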



14 Oct 14

Hardware Prototyping with Arduino: Sensors



[Image: Infrared Proximity Breakout - VCNL4000]

I was recently tasked with determining an effective method of detecting the removal of a small product from an enclosed space. This was an excellent opportunity to break out the breadboard and multimeter to construct three quick sensor prototypes and explore their potential benefits and drawbacks. Here is a summary of the sensors I explored and my impressions of them.

 

Continue reading “Hardware Prototyping with Arduino: Sensors” »



24 Sep 14

The Apple Watch is Not Bringing the Wearable Revolution (but Apple Will Still Sell Millions of Them)



A short while ago, the tech world waited with bated breath for Apple's unveiling of this year's iPhone. Along with that announcement, Apple also gifted the world with Apple Pay and the Apple Watch (among other things).

Cue the breathless proclamations that Apple has turned the wearable devices category on its head. Hyperbole aside, the Apple Watch hasn’t even been released yet, nor is it available for pre-order, so making such pronouncements is premature at best.

Apple met with tremendous success upon releasing the iPod, iPhone, and iPad; that much is not up for debate. What is up for debate is the Apple Watch's chances of joining its larger, older brethren. It is my opinion that the Apple Watch, like other wearables before it, will not put a watch on everyone's wrist the way the iPhone put a smartphone in everyone's pocket. (I couldn't resist that hyperbole.)

Cost

When it is finally released, the Apple Watch will retail for $349. However, there's no indication of what that dollar amount will include. Will it cover just the smaller watch and a basic band? Either size watch and a band of your choice? Or just the watch with no band whatsoever? Keep in mind that there are two different sizes of watch, three different models, and a large assortment of bands in the offing.

Thus, while the Apple Watch no doubt has the same build quality as previous Apple products, it is also a premium product. I imagine it will bring the thought of owning a watch into the minds of many people who normally don't wear one, but they'll have to choke down the $349 (minimum) price tag to do it.

Convenience

Couple that price with the fact that watches in that range are designed to last a considerable amount of time. Personally, I have owned my titanium Citizen watch for 4 years; it still works flawlessly, has nary a scratch on it, and cost me $350. Plus it never needs a battery.

In contrast, a smart watch, like a smartphone, becomes obsolete when its hardware can no longer support the increasing demands of its software (an unfortunate corollary to Moore's law). Most people replace their phones after 16 months; does Apple expect them to do the same with their watch? Keep in mind that the Apple Watch won't be subsidized like the iPhone.

Capability

The biggest hurdle the Apple Watch can't get over (like all smart watches) is that it's designed to work with a companion smartphone. The phone does the heavy lifting as far as processing and sensor calculations go. That makes the watch lighter and cheaper, but it also means the watch requires a compatible iPhone to function at all.

This is the area where the Apple Watch differs most from its sister Apple i-devices: you can use an iPhone without an iPod or an iPad. Being only a companion device limits the watch's sales potential. This isn't just an issue with the Apple Watch, although it's the most high-profile device to have it; it's an issue with all wearables.

As I wrote earlier, the masses would be better served by companion devices (watches, glasses, in-car infotainment systems) that are device agnostic. While that hurts the short-term profits of companies entering this space with companion products, it is also what could deliver the kind of revolution the iPhone, iPad, and iPod did.



19 Aug 14

A Bridge to Cars



Smartphone OS Tentacles

Recently, there have been major pushes by the makers of smartphone OSes (most notably Google and Apple) to extend their software's connectivity to other devices. The most commonly cited example is wearables, such as the oft-rumored but still vaporware iWatch and the underwhelming Galaxy Gear. Slightly less well known, though no less important, is the push by Apple and Google into the automotive infotainment space through CarPlay and Android Auto, respectively.

Both “standards” are designed to provide virtually the same functionality: hands-free calling, turn-by-turn navigation, SMS functionality, and music streaming through a set of pre-approved and heavily curated services and apps. I call them “standards” because they are really just proprietary layers that only operate within the respective Google or Apple ecosystem.

The current incarnation of automotive infotainment systems is mostly driven by the various Bluetooth protocols. While Bluetooth is certainly capable of information transfer (there are more than 20 Bluetooth protocols), it's not designed to be a high-bandwidth platform. As a result, automakers have started to look elsewhere to expand infotainment system capabilities.

Moreover, since the infotainment system is mostly outside of what automakers excel at (i.e. making automobiles), it stands to reason that they would be interested in outsourcing this functionality as much as possible to someone who is a subject matter expert. Enter Google and Apple, who are more than happy to supplant the vendor-specific infotainment systems that exist today with something a bit more standard, although still vendor-specific.

Apple and Google both announced their "solutions" to the "in-car infotainment" problem to much fanfare, along with heavily overlapping sets of partners (both carmakers and aftermarket parts suppliers). We also know that Apple's CarPlay will only work with iOS (version 7.1 and later, and only on the iPhone 5 and later) and that Google's Android Auto will only work with Android (version L and later).

The Issue with Vendor Lock-In

These developments bring about several new pain points for consumers. The first is that a consumer who buys a car is now buying an infotainment system specific to their current choice of phone. Alternatively, carmakers could include both experiences (CarPlay and Android Auto) in a single system; but since the two are not compatible, a driver would essentially be paying for an infotainment system that their device is only partially compatible with. That's to say nothing of families whose members own different phones (like mine), where the situation would be even worse.

The second pain point is that this pushes us farther down the road of vendor lock-in. If a consumer buys a car with CarPlay because she currently owns an iPhone, she would have to purchase another iPhone when replacing her current one for it to continue to work with her car's infotainment system. The same would be true with Android phones.

While the second point is exactly what Apple and Google want, it's incredibly anti-consumer and short-sighted. It also presents a support problem. Most people keep their cars for 4.75 years, while the typical smartphone lasts around 16 months. With the pace of smartphone innovation, it's entirely possible that after those 4.75 years are up, the latest smartphones (which the average consumer will have upgraded 3-4 times) may no longer support the car's proprietary capabilities, severely damaging its resale value.

We Need a Better Solution

Rather than vendors with a vested interest working alone to tailor solutions that only work on their latest OSes, we need a single, open solution collaborated on by all of the big players in the automotive market (automakers, aftermarket vendors, and smartphone companies). Such a solution may already exist in MirrorLink, which leverages open standards to surface a phone's user interface and provides much of the same functionality as Android Auto and CarPlay, without being proprietary.

Why is an open, standard solution a better way forward? Firstly, the dominant smartphone in one market is not necessarily the dominant smartphone somewhere else. While Google's Android largely owns the smartphone market worldwide, there are plenty of places where the iPhone has a large market share as well (e.g. the United States). Secondly, if the two companies refuse to work together, consumers end up with vendor-specific solutions that have the potential to hamper adoption. In addition, by limiting the selection of apps that can function with a car's infotainment system, innovation is stymied in both interface design and application development. New players can't enter the market against established brands because they're unlikely to clear the hurdle posed by certification, especially if they compete directly with an existing, certified service or app.

Instead, a system that allows developers to smartly tap into the car’s infotainment system through the smartphone OS in a standardized way (i.e. the same on iOS and Android), while still pulling the cues for interface design from the OS, provides a better way forward for everyone. A way that doesn’t tie your watch or your glasses or your car to your current choice in phone.



22 May 14

Developing for the Shareable Internet



When we think about developing web pages for mobile and desktop use, we usually think about making sure that a user who reaches our desktop site on a mobile device gets redirected to the so-called mobile-friendly view instead. That's because we've spent a lot of time, treasure, and energy tailoring our mobile-friendly sites to mobile devices: controls designed for touch input, fewer images, and content that wraps to the width of the viewport. What we don't give much thought to are the interactions between users, which often produce the opposite case. With the advent of the "Shareable" Internet, we're increasingly going to need to.

What is the "Shareable" Internet?

If you’ve ever posted a link on Facebook or Twitter or sent someone a link via text message or email then you are a part of the "Shareable" Internet. The Internet thrives on sharing content, concepts, and ideas between users. That’s why it was created after all! Gone are the days when it was a single user consuming and interacting with just content. Now users are interacting (i.e. sharing) with each other.

I'll get back to an issue that the actual sharing raises for developers in a moment. First, I want to talk about where the shared links are coming from. There was an article on CNN Money a few months ago reporting that U.S. internet usage from mobile devices had surpassed desktop usage for the first time. What are people doing on their mobile devices? They're consuming content and, because this is what the Internet (note the capital I) is built for, they're sharing that content anywhere and everywhere from their mobile devices.

If a website has a mobile-friendly view and a desktop view, and you’re on a mobile device, you’re likely on the mobile-friendly view. If you then share that page’s URL with your Twitter followers or your Facebook friends or whomever else via the built-in sharing functionality of your device, you’re going to be sharing the mobile-friendly URL (with a few caveats). If I then click on that link from my laptop, I’m going to be taken to the mobile-friendly URL and (without desktop browser detection or Responsive Web Design) I’m going to see the mobile-friendly page, which will likely look horrible on my desktop browser (think no margins, tiny images, and poor page flow).

Below is an example of what I’m talking about. The site uses mobile-browser detection to redirect to a mobile-friendly site if you’re on a phone, but there’s no redirection for the desktop site if you go to the same URL (which is what would have been shared).

[Screenshots: the same URL rendered as the mobile-friendly page on a phone and in a desktop browser]

What Should We Do?

This is obviously sub-optimal. So what should we, as developers, do to keep the "Shareable" Internet (big I again) humming smoothly? At a minimum, we should direct a desktop user to the desktop version of our site if they somehow end up on the mobile-friendly one. There are also design practices (i.e. Responsive Web Design) that avoid browser redirection entirely. The approach your site takes doesn't matter, as long as you get desktop users where they belong when they follow a link to a mobile-friendly page. Simply leaving them on the mobile view without a way back is not an option if you want to keep users happy.
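
As an illustration of that minimum, here is a naive client-side sketch. The domains are hypothetical (it assumes mobile pages live at m.example.com and map one-to-one onto www.example.com paths), and real sites should prefer server-side detection or responsive design:

// Hypothetical domains; crude user-agent sniffing for illustration only.
var looksMobile = /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);
if (!looksMobile && location.hostname === 'm.example.com') {
  // Send desktop browsers to the equivalent desktop URL.
  location.replace('http://www.example.com' + location.pathname + location.search);
}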



11 Apr 14

The Heartbleed Bug: How Bad Is It Really?



There's been a lot of digital ink spilled over the past few days about the so-called "Heartbleed" bug. The bug should not be confused with a computer virus, which is a malicious program designed to steal from or otherwise harm a computer system and which may or may not exploit a security flaw to do its dirty work. While calling Heartbleed a virus may make more sense to end users, it's also misleading sensationalism. So now that we have that out of the way, let's talk about how SSL (which stands for Secure Sockets Layer) actually works in relation to Heartbleed, what systems are affected, and what it all means for end users and developers.

SSL Keep-Alive or Heartbeat Requests

SSL is a cryptographic protocol that provides end-to-end security for packets sent over computer networks. It's end-to-end in that after an initial exchange (called a handshake), in which the client and the remote server agree on a "key", all subsequent traffic is encrypted. Wikipedia has a great description here: Secure Sockets Layer

Anyway, once the SSL channel is established with the remote server, the client and server can communicate in a secure fashion that is not easily defeated.  Each package of communication is encrypted from the moment it leaves the client until the moment it reaches the server and vice versa.

However, starting and restarting an SSL channel is relatively time- and resource-intensive. A lot of messages have to go back and forth as part of the handshake before the actual content can be delivered. While all of this happens in milliseconds, that's a long time to a computer, and the connection can time out if enough time passes between one message and the next. So the SSL protocol also defines a "heartbeat", or keep-alive request, that maintains the connection and keeps it from timing out.

Content of a Heartbeat Request

The heartbeat request is relatively simple. The client sends a "heartbeat" to the server with two items: a "secret message" and the length of that secret message. The server is supposed to respond with the complete secret message. The length field is there in case the message gets broken up into multiple smaller messages along the way, which can happen. So if your client sends the request "Message: Transformers, Length: 12", the server is supposed to respond with "Message: Transformers". It shouldn't respond with anything else, like "Robots in disguise" or "Optimus Prime" or "Hail Megatron", because that's not what the client is expecting to get back.

The “Heartbleed” Bug

On December 31, 2011 (yes, more than two years ago), a change was made to the code of OpenSSL's implementation of the SSL protocol. OpenSSL is an open-source SSL implementation licensed under terms similar to the BSD License, in that the source is freely available and changes can be made without restriction or contribution back to the original source. OpenSSL is used heavily in most open-source web servers, most notably Apache.

The change altered the way the server validates the reported message length against the actual message length. Instead of simply parroting back the message it received, the server returns the contents of memory starting at the message and continuing for the reported length, in what's known as an overflow bug. This is a problem because those contents can be anything the current process has loaded, from passwords to the server's core certificate. That's very, very bad.
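
As a toy simulation only (OpenSSL is written in C, and real process memory is nothing this tidy), here is the shape of the bug in JavaScript, with one string standing in for the process's memory:

// Toy model: one string stands in for the vulnerable process's memory.
var processMemory = 'Transformers|session=abc123|password=hunter2|<master key bytes>';

function heartbeat(message, reportedLength) {
  var start = processMemory.indexOf(message);
  // The bug: the server trusts reportedLength instead of checking it
  // against message.length, so it can read far past the end of the message.
  return processMemory.slice(start, start + reportedLength);
}

console.log(heartbeat('Transformers', 12)); // "Transformers" -- an honest request
console.log(heartbeat('Transformers', 60)); // also leaks the session, password, and key

The fix is exactly what you would expect: reject or truncate any heartbeat whose reported length exceeds the actual message length.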

The length field is 16 bits, so it maxes out around 64,000, meaning an attacker can read up to 64KB of memory at a time. There's also no limit on how often a heartbeat can be sent, so multiple heartbeats can be sent in rapid succession and the server will respond to all of them. Thus, a sophisticated attacker could build a near-complete map of the server's memory and everything in it. So it's not hyperbole to say that systems using the offending code are incredibly vulnerable, and a large amount of data (not just user data) could be compromised as a result.

XKCD published an oversimplified illustration of how Heartbleed works that provides a healthy amount of FUD. We'll go into detail on why it's an oversimplification below.

What Systems are Affected?

So now that you understand how bad Heartbleed is, which websites are affected? The good news is the answer isn't "all of them". In fact, it might not even be "most of them"; it all comes down to which sites you use every day. As I mentioned previously, the bug lives in OpenSSL, so only systems that use OpenSSL are affected.

Unfortunately, the list of affected companies includes such names as Facebook, Yahoo, Instagram, Twitter, Tumblr, and Google.  All of them used the affected code.  So if you use Facebook, Yahoo Mail, Gmail, etc. then you could be affected.

One thing I want to make clear is that no sites running Microsoft's web server (IIS) are affected, nor are the majority of banks (especially the largest, e.g. Citi and Bank of America), nor Amazon.com.

You can find an up-to-date listing of affected sites here: Which Sites Have Patched the Heartbleed Bug

How Bad Is It?

It’s bad.  Make no mistake that until systems are patched, it’s going to be incredibly easy for attackers to go after data on those systems.

There are plenty of nightmare scenarios where an attacker gets their hands on the master key for a server (or the master key used to secure other keys).  They’d then be able to listen in and decrypt any traffic coming from that server provided they can insert themselves into the route between the client and a server to capture the traffic.

There are also scenarios where passwords are stolen and hackers run amok with credit card numbers.

How bad is it really?

Leaving all hyperbole aside, it's unlikely that this attack will amount to much beyond a bunch of page views and scared users. I'm not saying there is no threat; there absolutely is. But some of the advice, "change all of your passwords now and don't use the web until the sites you use are patched", is more sensational than practical for the average user, especially since there are limited reports of affected sites actually being attacked. And of those sites, no one knows what data the attackers actually got or whether it was useful.

Let’s step back and look at what is actually vulnerable in this attack: anything within the process’s RAM memory space.  This could be things like:

  • The server’s private master key that it uses to send and receive all data
  • User passwords and other request data stored in RAM

However, a lot of stuff isn't vulnerable to this bug. Every modern operating system uses process isolation to keep one process's hands out of another's proverbial cookie jar of RAM. Even though OpenSSL will read up to 64,000 bytes past the start of the secret message in memory, it can't read past the end of the current process's memory space. That limitation is enforced by the operating system, and the vulnerability doesn't affect it. So anything running in a separate process is safe.

Additionally, the process doesn’t read from the hard drive to return the heartbeat, so none of that data is at risk either.  An attacker can’t use this vulnerability to steal data stored in a database or movies stored on Netflix.

More importantly, the data an attacker does get back is not going to look as nice and neat as the XKCD example. Realistically, it'll just look like a huge jumble of random data. Without any knowledge of how the process actually works, which an attacker wouldn't have, there'd be no guaranteed way to distinguish a server's master key from, say, a Base64-encoded request. It's not as if data in memory reads "Master Key: 1234567890, Password: Password". That said, many requests mapping the memory space would help identify patterns, which would aid in separating the variable parts (like passwords) from the consistent parts (like keys).

This is why it’s something that needs to be fixed and it’s a good idea to change your password for sites that are affected.  However, the web as a whole isn’t broken and there’s not going to be any drastic shift in user confidence.

Further Reading:

OpenSSL "Heartbleed" bug: what’s at risk on the server and what is not

Information About Heartbleed and IIS

Heartbleed Bug: Everything You Need to Know About OpenSSL Security Flaw

The Programmer Behind Heartbleed Speaks Out: It Was an Accident



19 Feb 14

Comparing Azure Configuration Files in Visual Studio



My current project is 100% Azure-based. As a result, we make heavy use of configuration settings (.cscfg and .csdef files) to set up our environments and differentiate between local development, development, staging, and production. Because we're constantly updating these files and there are multiple people on my team, we constantly need to resolve conflicts in them.

For the longest time, I would attempt to compare these files and receive the following error: "TF10201: Source control could not start the manual merge tool". Before the error appeared, I'd receive a warning that the file was open and needed to be closed to perform the comparison. Generally the file was open, so I'd say Yes at the prompt to automatically close it and proceed with the compare. Then I would get the error, and there wasn't any way around it except to manually copy over the changes and keep my local copy. Surely, I thought, the file extension can't be breaking a simple XML file merge. After all, I can merge XML files without issue; why would a different file extension matter if the content is still text?

Today, however, the files weren't open and I still received the error. That got me thinking that both the file-open warning and the error might be caused by Visual Studio itself holding the files open. So I closed the solution and retried the comparison. Sure enough, the comparison tool opened as expected and worked as designed.

So the moral of the story is: if you’re unable to compare Azure configuration files in Visual Studio, close the associated project(s) first before proceeding with the comparison.



16 Feb 14

(Updated) Review: Lenovo ThinkPad X1 Carbon (2nd Generation)



I just got a brand-new, top-of-the-line, second-generation Lenovo ThinkPad X1 Carbon to use as my main development machine. I'm still in the middle of converting over from my old machine, an aging Lenovo ThinkPad T410s, so this review is more about first impressions and initial thoughts than anything else. I'll update it after I've used the machine for a few weeks. One thing I want to make clear is that I'm going to use this machine differently from most users: instead of traveling with it, I'll be using it for all forms of software development and really putting it through its paces.

Before I get into my initial thoughts, here’s a quick rundown of the new machine:

  • Intel Core i7-4600U @ 2.1GHz (Haswell series)
  • 8GB of PC3-12800 DDR3L RAM running at 1600MHz
  • 256GB Samsung SSD
  • 2560 x 1440 resolution touch screen display

The Good

Here are the things that I like: the weight, the screen, the performance, the included Ethernet dongle, the battery life, and the power adapter.

The Carbon weighs a mere 2.8 pounds. That's not as light as a MacBook Air, but the Carbon offers a "retina" display that you just can't get with a MacBook Air.

Speaking of displays, the Carbon's 2560 x 1440 display is gorgeous. It offers so much more screen real estate than a standard HD display. The added space shows much more content in programs like Visual Studio, which means more lines of code and less scrolling.

In addition to the great screen real estate, the performance is what you would expect from an i7 with a SATA III SSD. It appears to have no issues running my normal workflow, which includes SQL Server, Visual Studio, and the Azure emulators.

Also, it's nice that the Carbon comes with an Ethernet breakout dongle, which plugs into a dedicated micro Ethernet port. That's a sensible compromise, since the laptop isn't thick enough to support a full-size Ethernet port.

All of this is nice, but the battery life makes it even better. As I type this, I have 81% battery remaining and an estimated 5.5 hours left. That's a long time, and much longer than I've gotten from any laptop I've owned.

Along with the battery life, the power adapter is comparatively small and flat. The flat connector could have been a problem, but it fits into the port either way up. Like the Carbon itself, the power adapter is small and light, as it should be, since the Haswell chips are so power-efficient.

The Bad

Unfortunately, the Carbon isn’t all sunshine and rainbows.  Here’s what I don’t like but will learn to live with:

The fingerprint reader is in a good location but doesn't seem to pick up my fingerprint every time. I've already spent several minutes, on several occasions, trying to log in with it. The reader is rather recessed, which could be why it doesn't always read my finger.

The touchpad is really smooth, so it's hard to tell when you're accidentally touching it. The included software keeps me from accidentally clicking or tapping, which is nice. It's definitely an improvement over the grainy touchpad on the T410s, but now it's almost too smooth. Also, there are no physical buttons on the touchpad anymore; the whole thing functions as one large button, with a click in the lower right-hand corner registering as an alternate click instead of a primary click. I'm not a huge fan of this recent trend in laptops because there's no indication that you're in the "zone" for the alternate click. And because the whole surface is a button, I've accidentally moved the mouse while physically clicking the touchpad, which has caused me to miss what I was clicking on a couple of times.

In addition, there aren’t any indicators for battery status (including charging), so you have to rely on Windows for that information.  However, when the laptop is sleeping, the dot in the “I” in ThinkPad on the cover slowly cycles red, which is a clever touch.

Finally, the speakers are on the underside of the case, so you could accidentally muffle them if the machine is on your lap.  They also aren’t the greatest speakers in the world, but I don’t intend to use them often enough to notice.

The Ugly

The Ugly are design decisions that I wish Lenovo hadn’t made but will have to deal with.

Before I got this machine, I read several reviews that really harped on the keyboard design. I can confirm that it's every bit as jarring as the reviews paint it to be. The Caps Lock key has been replaced with a split Home/End key. I use Home and End all the time; I don't use Caps Lock, so its absence just means I won't accidentally shout at people while hunting for the A key. Also, the paging keys (Page Up and Page Down) now sit directly above the left and right arrow keys, respectively. That's not a bad placement, but the keys are nearly on top of each other, and I've already reached for the arrows several times and hit the paging keys instead.

In that same vein, the adaptive function row has symbols I've never seen before and no labels for what they do, so I've had to discover their uses. Most have turned out to be useful, but there's no way to customize which keys appear in which row. For example, I'd love to have the function keys and the volume controls in the same "row", instead of the function keys sharing a row with settings-related keys like connecting to a second monitor or toggling wifi. There is a dedicated key for flipping through the rows, which is very useful. There's also no feedback when you touch a button on the row, like the buzz you get from the soft keys on a smartphone. That would be very useful, since you're no longer pressing a physical button.

Moving on to the case, several things are notably excluded or poorly placed. The Carbon comes with a mini-DisplayPort but no dongle to convert it to a full-size DisplayPort, VGA, or anything else. Since the Ethernet dongle is included, the DisplayPort dongle is a notable absence. Also, the full-size HDMI port seems like overkill, since DisplayPort can convert to HDMI; that space could easily have gone to an SD card slot instead.

All of these items are mostly minor in the grand scheme of things, but they do highlight areas for improvement.

Summary

I think I'm going to be very happy with this machine once I understand its quirks. Although I've harped a lot on the keyboard, I don't think it's going to slow me down very much. The lack of touch feedback on the function row will likely be my biggest gripe once my touch typing figures out where Home, End, and Delete are. Other than that, I look forward to using it and seeing how much it increases my productivity. I would say I'm more than cautiously optimistic, but I'm definitely not over the moon yet. Lenovo has made a great laptop; I'll see how it does for development soon.

Update:

I’ve now used the X1 Carbon for a full week as my sole development machine.  I have to say that it’s pretty nice.  It’s more than capable of handling whatever I throw at it.  However, there are some things that I didn’t notice initially that would make it even better.

There are no media function keys (play/pause, fast-forward/rewind). This is a huge letdown: there are four adaptive row sets, and not one of them includes media buttons. I realize this is supposed to be a "business" machine, but I don't know any business person who doesn't listen to music or watch videos during downtime. Not having a hardware key to reach for is frustrating and something I will definitely miss. Besides, I have laptops that are ten years old with dedicated media keys.

The screen causes scaling issues with external low-PPI monitors. This is more of a Windows issue: because the X1 Carbon's screen has such a high pixel density, text is almost impossible to read without scaling. Unfortunately, Windows handles scaling by either making everything the same physical size across screens or making everything the same number of pixels, with no way to set scaling per monitor. So I either get tiny text on the high-density display or massive text on the low-density one. This isn't Lenovo's fault, but it's something to be aware of.
