
11 Apr 14

The Heartbleed Bug: How Bad Is It Really?



There's been a lot of digital ink spilled over the past few days about the so-called "Heartbleed" bug. The bug should not be confused with a computer virus: a virus is a malicious program designed to steal data or otherwise harm a computer system, and it may or may not exploit a security flaw to do its dirty work. Calling Heartbleed a virus may make more sense to end users, but it's also misleading sensationalism. So now that we have that out of the way, let's talk about how SSL (which stands for Secure Sockets Layer) actually works in relation to Heartbleed, which systems are affected, and what it all means for end users and developers.

SSL Keep-Alive or Heartbeat Requests

SSL is a cryptographic protocol that provides end-to-end security for data sent over computer networks. It's end-to-end in that, after an initial negotiation (called a handshake) between the client and the remote server to agree on a "key", all subsequent traffic is encrypted. Wikipedia has a great description here: Secure Sockets Layer

Anyway, once the SSL channel is established with the remote server, the client and server can communicate in a secure fashion that is not easily defeated. Each piece of communication is encrypted from the moment it leaves the client until the moment it reaches the server, and vice versa.

However, starting (and restarting) an SSL channel is relatively time- and resource-intensive. A lot of messages have to go back and forth as part of the handshake before the actual content can be delivered. While all of this happens in milliseconds, that's a long time to a computer, and the connection can time out if enough time passes between one message and the next. So the SSL protocol also defines a "heartbeat", or keep-alive, request that maintains the connection and keeps it from timing out.

Content of a Heartbeat Request

The heartbeat request is relatively simple. The client sends a "heartbeat" to the server with two items: a "secret message" and the length of that secret message. The server is supposed to respond with the complete secret message. The length argument is included because the message can get broken up into multiple smaller pieces along the way, since that can happen. So if your client sends the request "Message: Transformers, Length: 12", the server is supposed to respond with "Message: Transformers". It shouldn't respond with anything else, like "Robots in disguise" or "Optimus Prime" or "Hail Megatron", because that's not what the client is expecting to get back.
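Purely for illustration (OpenSSL itself is written in C, and the type and member names below are invented), a well-behaved heartbeat echo might look something like this in C#:

using System;

// Illustrative only: a heartbeat carries a payload (the "secret message")
// plus the sender's claim about how long that payload is.
public class HeartbeatRequest
{
    public byte[] Payload { get; set; }
    public int ReportedLength { get; set; }
}

public static class HeartbeatHandler
{
    // A well-behaved server echoes back only the payload it actually received.
    public static byte[] Respond(HeartbeatRequest request)
    {
        var response = new byte[request.Payload.Length];
        Array.Copy(request.Payload, response, request.Payload.Length);
        return response;
    }
}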

The “Heartbleed” Bug

On December 31, 2011 (yes, more than two years ago) a change was made to the code in the OpenSSL implementation of the SSL protocol. OpenSSL is an open-source SSL implementation released under terms similar to the BSD License: the source is freely available, and changes can be made without any requirement to contribute them back to the original project. OpenSSL is used heavily by most open-source web servers, most notably Apache.

The change affected the way the server validates the reported message length against the actual message length. Instead of simply parroting back the message it received, the server returns the contents of memory starting at the message and continuing for the reported length, in what's known as a buffer over-read. This is a problem because that memory can contain anything the current process has loaded, from user passwords to the server's private key. That's very, very bad.
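Again purely as an illustration (the real flaw lives in OpenSSL's C code, and the names here are invented), imagine the process's memory as one big byte array with the received payload sitting somewhere inside it. The buggy code trusts the reported length; the fix checks it against the payload that actually arrived:

using System;

public static class HeartbeatEcho
{
    // processMemory stands in for the server process's RAM; the received payload
    // starts at payloadOffset and is actually payloadLength bytes long.
    public static byte[] Buggy(byte[] processMemory, int payloadOffset,
                               int payloadLength, int reportedLength)
    {
        // Bug: copies reportedLength bytes without checking it against payloadLength,
        // so whatever sits next to the payload in memory leaks back to the client.
        var response = new byte[reportedLength];
        Array.Copy(processMemory, payloadOffset, response, 0, reportedLength);
        return response;
    }

    public static byte[] Fixed(byte[] processMemory, int payloadOffset,
                               int payloadLength, int reportedLength)
    {
        // Fix: discard heartbeats whose reported length exceeds the actual payload.
        if (reportedLength > payloadLength)
            return null;

        var response = new byte[reportedLength];
        Array.Copy(processMemory, payloadOffset, response, 0, reportedLength);
        return response;
    }
}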

The length field is 16 bits, so it maxes out around 64,000, meaning an attacker can read roughly 64KB of memory at a time. There's also no limit on how often a heartbeat can be sent, so multiple heartbeats can be sent in rapid succession and the server will respond to all of them. Thus, a sophisticated attacker could build a near-complete map of the server's memory and everything in it. So it's not hyperbole to say that systems running the offending code are incredibly vulnerable, and a large amount of data (not just user data) could be compromised as a result.

The well-known XKCD comic offers an oversimplified picture of how Heartbleed works, along with a healthy amount of FUD. We'll go into detail below on why it's an oversimplification.

What Systems are Affected?

So now that you understand how bad Heartbleed is, which websites have been affected? The good news is the answer isn't "all of them". In fact, the answer might not even be "most of them". It all comes down to which sites you use every day. As I mentioned previously, OpenSSL is the only affected implementation, so only systems that use OpenSSL are affected.

Unfortunately, the list of affected companies includes such names as Facebook, Yahoo, Instagram, Twitter, Tumblr, and Google.  All of them used the affected code.  So if you use Facebook, Yahoo Mail, Gmail, etc. then you could be affected.

One thing I want to make clear is that no sites running Microsoft's web server (IIS) are affected, nor are the majority of banks (especially the largest banks, e.g. Citi, Bank of America), or Amazon.com.

You can find an up-to-date listing of affected sites here: Which Sites Have Patched the Heartbleed Bug

How Bad Is It?

It’s bad.  Make no mistake that until systems are patched, it’s going to be incredibly easy for attackers to go after data on those systems.

There are plenty of nightmare scenarios where an attacker gets their hands on the master key for a server (or the master key used to secure other keys).  They’d then be able to listen in and decrypt any traffic coming from that server provided they can insert themselves into the route between the client and a server to capture the traffic.

There are also scenarios where passwords are stolen and hackers run amok with credit card numbers.

How bad is it really?

Leaving all hyperbole aside, this attack is unlikely to amount to much more than a bunch of page views and scared users. I'm not saying there is no threat; there absolutely is. But some of the advice, like "change all of your passwords now and don't use the web until the sites you use are patched", is more sensational than practical for the average user, especially since there are limited reports of affected sites actually being attacked. And for those sites, no one knows what data the attackers actually got or whether it was useful.

Let's step back and look at what is actually vulnerable in this attack: anything within the process's memory space (RAM). This could be things like:

  • The server’s private master key that it uses to send and receive all data
  • User passwords and other request data stored in RAM

However, a lot of stuff isn't vulnerable to this bug. Every modern operating system uses memory isolation to keep each process's hands out of the proverbial cookie jar that is another process's RAM. Even though OpenSSL will read up to 64KB past the start of the secret message in memory, it can't read past the end of the current process's memory space. That limit is enforced by the operating system, and the vulnerability doesn't affect it. So anything running in a separate process is safe.

Additionally, the process doesn’t read from the hard drive to return the heartbeat, so none of that data is at risk either.  An attacker can’t use this vulnerability to steal data stored in a database or movies stored on Netflix.

More importantly, the data an attacker does get back is not going to look as nice and neat as the XKCD example. Realistically, it will look like a huge jumble of random data. Without knowledge of how the process actually works, which an attacker wouldn't have, there's no guaranteed way to tell a server's master key from, say, a Base64-encoded request. It's not like data in memory reads "Master Key: 1234567890, Password: Password". It is true, though, that repeated requests that map the memory space will reveal patterns, helping an attacker separate what varies between snapshots (like passwords) from what stays consistent (like keys).

This is why it’s something that needs to be fixed and it’s a good idea to change your password for sites that are affected.  However, the web as a whole isn’t broken and there’s not going to be any drastic shift in user confidence.

Further Reading:

OpenSSL "Heartbleed" bug: what’s at risk on the server and what is not

Information About Heartbleed and IIS

Heartbleed Bug: Everything You Need to Know About OpenSSL Security Flaw

The Programmer Behind Heartbleed Speaks Out: It Was an Accident



19 Feb 14

Comparing Azure Configuration Files in Visual Studio



My current project is 100% Azure-based.  As a result, we make heavy use of configuration settings (.cscfg and .csdef files) to set up our environments and differentiate between local development, development, staging, and production.  Because we're constantly updating these files and there are multiple people on my team, we frequently need to resolve merge conflicts in them.

For the longest time, I would attempt to compare these files and receive the following error: "TF10201: Source control could not start the manual merge tool".  Before the error, I'd get a warning that the file was open and needed to be closed to perform the comparison.  Generally the file was open, so I'd say Yes at the prompt to automatically close it and proceed with the compare.  Then I would get the error, and there wasn't any way around it except to manually copy over the changes and keep my local copy.  Surely, I thought, the file extension can't be breaking a simple XML file merge.  After all, I can merge XML files without issue, so why would a different extension matter if the content is still text?

Today, however, the files weren't open and I still received the error.  That got me thinking that both the file-open warning and the error might be caused by the files being held open by Visual Studio itself as part of the loaded solution.  So I closed the solution and retried the comparison.  Sure enough, the comparison tool opened as expected and worked as designed.

So the moral of the story is: if you’re unable to compare Azure configuration files in Visual Studio, close the associated project(s) first before proceeding with the comparison.



16 Feb 14

(Updated) Review: Lenovo ThinkPad X1 Carbon (2nd Generation)



I just got a brand new, top-of-the-line, second generation Lenovo ThinkPad X1 Carbon to use as my main development machine.  I’m still in the middle of converting over from my old machine, an aging Lenovo ThinkPad T410s, so this review is more about first impressions and initial thoughts than anything else.  I’ll update this review after I’ve used the machine for a few weeks.  One thing I want to make clear is that I’m going to use this machine differently from most users.  I will be using this machine for all forms of software development and really putting it through its paces instead of travelling with it.

Before I get into my initial thoughts, here’s a quick rundown of the new machine:

  • Intel Core i7-4600U @ 2.1GHz (Haswell series)
  • 8GB of PC3-12800 DDR3L RAM running at 1600MHz
  • 256GB Samsung SSD
  • 2560 x 1440 resolution touch screen display

The Good

Here are the things that I like: the weight, the screen, the performance, the included Ethernet dongle, the battery life, and the power adapter.

The Carbon weighs a whopping 2.8 pounds.  That’s not as light as the MacBook Air, but the Carbon offers a “retina” display that you just can’t get with a MacBook Air.

Speaking of displays, the Carbon’s 2560 x 1440 display is gorgeous.  It offers so much more screen real estate than a standard HD display.  The added space shows much more content for programs like Visual Studio which means more lines of code and less scrolling.

In addition to the great screen real estate, the performance is what you would expect from an i7 with an SSD with SATA III.  It appears to have no issues running my normal workflow which includes SQL Server, Visual Studio, and Azure emulators.

Also, it’s nice that the Carbon comes with an Ethernet breakout dongle.  There’s a dedicated micro Ethernet port that the dongle plugs in to.  This is nice, and it’s not like the laptop is thick enough to support a full-size Ethernet port.

All of this is nice, but the battery life makes it even better.  As I type this, I have 81% battery remaining and an estimate of 5.5 hours remaining.  That’s a long time and much longer than any laptop I’ve ever owned.

Along with the battery life, the power adapter is comparatively small and flat.  The flatness could be a bad thing, but the plug fits into the port either way up, so it's not a problem.  Like the Carbon itself, the power adapter is small and light, as it should be given how power-efficient the Haswell chips are.

The Bad

Unfortunately, the Carbon isn’t all sunshine and rainbows.  Here’s what I don’t like but will learn to live with:

The fingerprint reader is in a good location but doesn't seem to pick up my fingerprint every time.  I've already spent several minutes, on several occasions, trying to log in with my fingerprint.  The reader is rather recessed, which could be why it misses my finger so often.

The touchpad is really smooth, so it's hard to tell when you're accidentally touching it.  The included software keeps me from accidentally clicking or tapping, so that's nice.  It's definitely an improvement over the grainy touchpad on the T410s, but now it's almost too smooth.  Also, there are no physical buttons on the touchpad anymore.  The whole thing functions as one large button, with a click in the lower right-hand corner registering as an alternate click instead of a primary click.  I'm not a huge fan of this recent trend in laptops because there's no indication that you're in the "zone" for an alternate click.  And because the whole surface is a button, I've accidentally moved the mouse while physically clicking the touchpad, which has caused me to miss what I was clicking on a couple of times.

In addition, there aren’t any indicators for battery status (including charging), so you have to rely on Windows for that information.  However, when the laptop is sleeping, the dot in the “I” in ThinkPad on the cover slowly cycles red, which is a clever touch.

Finally, the speakers are on the underside of the case, so you could accidentally muffle them if the machine is on your lap.  They also aren’t the greatest speakers in the world, but I don’t intend to use them often enough to notice.

The Ugly

The Ugly are design decisions that I wish Lenovo hadn’t made but will have to deal with.

Before I got this machine, I read several reviews that really harped on the keyboard design.  I have to confirm that it's every bit as jarring as the reviews paint it to be.  The Caps Lock button has been replaced with a split Home/End button.  I use Home and End all the time.  I don't use Caps Lock, so its absence just means I won't accidentally shout at people while trying to find the A key.  Also, the paging keys (Page Up and Page Down) now sit directly above the left and right arrow keys, respectively.  That's not a bad placement, but the keys are nearly on top of each other, and I've already reached for the arrows several times and hit the paging keys instead.

In that same vein, the adaptive function row has symbols I'd never seen before and no labels for what they actually do, so I've had to discover their uses.  Most of them have turned out to be useful, but there's no way to customize which keys appear in which row.  For example, I'd love to have the function keys and volume controls on the same "row" instead of the function keys alongside settings-related keys like connecting to a second monitor or toggling wifi.  There is a dedicated key for flipping through the rows, which is very useful.  However, there's no feedback when you touch a button on the row, like the buzz you get from the soft keys on a smartphone.  That would be very useful since you're not actually pressing a physical button anymore.

Moving on to the case, several things are notably excluded or poorly placed.  The Carbon comes with a mini-DisplayPort but no dongle to convert it to full-size DisplayPort, VGA, or anything else.  Since the Ethernet dongle is included, the missing DisplayPort dongle is a notable absence.  Also, the full-size HDMI port seems like overkill since DisplayPort can convert to HDMI; that space could easily have gone to an SD card slot instead.

All of these items are mostly minor in the grand scheme of things, but they do highlight areas for improvement.

Summary

I think I'm going to be very happy with this machine once I understand its quirks.  Although I've harped a lot on the keyboard, I don't think it's going to slow me down very much.  The function row's lack of feedback when touched will likely be my biggest gripe once my touch typing figures out where Home, End, and Delete are.  Other than that, I look forward to using it and seeing how much it increases my productivity.  I would say I'm more than cautiously optimistic, but I'm definitely not over the moon yet.  Lenovo has made a great laptop; I'll see how it does for development soon.

Update:

I’ve now used the X1 Carbon for a full week as my sole development machine.  I have to say that it’s pretty nice.  It’s more than capable of handling whatever I throw at it.  However, there are some things that I didn’t notice initially that would make it even better.

There are no media function keys (play/pause, fast forward/rewind).  This is a huge letdown.  There are four adaptive row sets, and not one of them includes media buttons.  I realize this is supposed to be a "business" machine, but I don't know any business person who doesn't listen to music or watch videos during downtime.  Not being able to use a hardware key for this is frustrating and something I will definitely miss.  Besides, I have laptops that are ten years old with dedicated media keys.

The screen causes scaling issues when using external, low-PPI monitors.  This is more of a Windows issue: because the X1 Carbon's screen has such a high pixel density, text is almost impossible to read without some kind of scaling.  Unfortunately, Windows handles scaling by either making everything the same physical size across screens or making everything the same number of pixels, and there's no way to set scaling on a per-monitor basis.  So I either get tiny text on the high-density screen or massive text on the low-density one.  This isn't Lenovo's fault, but it's something to be aware of.



31 Jan 14

Development: To VM or Not to VM? That is the Question



As a software developer, you have a ton of different ways to develop new and exciting code.  You can use open-source tools, tools directly from the vendor, or a combination thereof. More importantly, thanks to the rise of the internet and source control, most software development doesn't require the developer to physically be in a certain place to write the software; development can be done from anywhere. This is one of the major reasons why development is offshored, outsourced, and/or contracted out. You can make the creation and maintenance of software as distributed as the Internet it runs on.

That's all fine and good, but what system should you, as a developer, actually develop on? This isn't a blog about Mac vs. Windows vs. Linux vs. punch cards. Instead, I'd like to discuss the merits of doing development within the confines of a virtual machine (VM) versus using a local machine. (By local machine, I mean a single operating system running on the bare hardware that is physically with you in space and time.) There are several inherent tradeoffs between developing locally and developing in a VM.

It’s a Wonderful VM (or "You Can’t Take It With You")

If you’ve ever seen the movie It’s a Wonderful Life, you likely remember the scene where George is standing on the bridge ready to give up on his life and commit suicide. This is a lot like developing on a local machine (stay with me). Sooner or later, that machine is going to need to be upgraded or replaced, especially if it’s a laptop. Like death and taxes, it’s just a fact of life. Unless all of your source code and all of your applications and the configuration elements for all of those items are in the cloud or somewhere else outside of your local machine, chances are that you’re going to have the new machine hiccups for a couple of days.

The new machine hiccups are where you get a new machine and have to install and configure it exactly how you wanted it. No matter how careful you are, you’re still likely to miss something at a crucial moment that will sap your productivity for the next 5-10 minutes while you fix it (i.e. a hiccup).

Anyway, if you're using a VM, getting a new local machine doesn't matter. You just copy the VM to the new machine, start it up, and it's like nothing ever happened. If you're one of the lucky few with a cloud-hosted VM, you don't even need to move the VM to the new machine; you just open your favorite remote desktop application and carry on. The immediate productivity benefit of using a VM cannot be overstated, especially if you're persnickety about your setup.

Like One of the Core Tenets of Object-Oriented Programming (OOP), Everything is Self-Contained

This is more of a corollary to the point above, but I think it deserves its own point as well. Along with moving to a new environment, if your current machine suffers a colossal meltdown, with a VM you can restore the underlying physical hardware and operating system/hypervisor and not notice nearly as much downtime as you’d face with setting up that new environment from scratch. This is another huge advantage of using VMs.

Along with letting you restore from backup, the fact that everything is self-contained means you can share the VM with the rest of your team in an instant. I can't count how many times I've built a VM for someone. There's no worry about reimaging a machine, or about installing lots of licensed software and hunting down license keys, either.

Lastly, it’s a great way to keep clients self-contained. I’ve routinely come across cases where a client will provide a license key to a specific piece of software. If I used that software for other projects for other clients, I’d be breaching a bunch of contracts. So it’s sometimes best to have a VM per client or project along with a core VM that you use to clone from when you start a new project.

Read the System Requirements

The last major advantage of developing on a VM over a local machine is system requirements. I don't know many developers who have a server operating system installed on their local machine; server software just isn't designed to be used as an end-user system, and it's incredibly expensive when installed on a physical machine. Yet there are a lot of applications out there that need to be installed on server operating systems (I'm looking at you, SharePoint!). While it might be technically possible to install server-class software on an end-user machine, it's not a good idea. Typically the software is crippled in some way, or it won't even install if it sees the wrong configuration. Therefore, it's best to read the system requirements for the software and install it on an operating system that isn't your local machine's. In that case, the only practical option is a VM.

Conversely, if you’re developing an application that doesn’t require server software, it’s much easier to just install all of that software locally and be done with it. You don’t have to fuss with fighting your hypervisor for memory or remembering to back up your VM locally.

Speed is Your Ally

All of the above being said, there are also significant advantages to a local machine. The biggest one, though the gap narrows with each passing year, is speed. There's no comparison between software running on a local machine and software running virtualized: the local software is always going to run faster. So if you don't have a very powerful development machine, or speed is of paramount importance, it's probably not your best move to use a VM for development unless your VM is in the cloud.

Know How to Use Your Virtualization Software!

Another major reason you would want to stick with local machine development is virtualization software. If you don’t know what you’re doing, you can easily hobble your VM by setting it up the wrong way or failing to enable all of the virtualization features of your machine in the BIOS. A scalpel in the hands of someone who isn’t trained how to use it might as well be a machete. So if you don’t really know how to use your virtualization software to get the maximum performance out of a VM, enable copy-paste across a VM and your local machine, or set up an internal network between VMs and your local machine, then look it up or perhaps virtualization isn’t for you.

In my old job, I had a machine that was relatively underpowered by the time I was due for a replacement. Before the upgrade, I had no choice but to virtualize my development because I was developing in SharePoint, but, at the same time, I didn't have a whole lot of power to virtualize with. I did a bunch of research to figure out how best to leverage the virtualization features and functionality my machine did have. As a result, most of my VMs ran faster on my machine than my coworkers' VMs did on their brand-new machines with four times the RAM, because I knew what I was doing.

Conclusion

There are merits to both developing locally and developing in a VM. As I said, at my previous company I exclusively used a VM for development and it was wonderful. Here at Clarity, the majority of us develop on local machines, and that's not so bad either. It's mostly a question of developer preference. So if you're a developer trying to decide between local and VM, I hope I've provided some insight and things to think about. If you're a manager or decision maker trying to decide which to favor, I hope this blog has been insightful. What gotchas have you faced when you moved between the two, and what did I miss in describing the trade-offs?



27 Nov 13

Custom Code Snippets



We love our jobs as software developers for all of the interesting and unique challenges we get to face on a day-to-day basis.  We get excited about learning a brand-new technology that just came out.  We love the opportunity to architect an elegant new framework to build an entire application on.  We embrace the chance to implement algorithms we learned in a theory class but now actually apply to our day-to-day work.  Unfortunately for us, there are also aspects of development that can become monotonous.  Our goal should be to find tools that let us complete these boring yet necessary pieces as efficiently as possible so that we can move on to more interesting problems.  This is where custom code snippets come in.

There are a number of times over the course of a project when we find ourselves writing code that is extremely repetitive, yet for one reason or another can't be simplified into a central implementation.  One of the best examples is properties on a class.  In some cases there can be tens or even hundreds of properties in a class.  All of them look something like:

public int MyProperty { get; set; }

OR

private int myVar;
public int MyProperty
{
 get { return myVar;}
 set { myVar = value;}
}

I think most developers who have used C# and Visual Studio for a while take advantage of being able to type prop [TAB] or propfull [TAB] [TAB] to create their own version of the above.  It's even more useful when creating something like a dependency property in WPF, which is significantly more complicated yet just as repetitive.  By typing propdp [TAB] [TAB], Visual Studio creates the following code for you and just asks you to fill in a couple of specifics:

public int MyProperty
{
 get { return (int)GetValue(MyPropertyProperty); }
 set { SetValue(MyPropertyProperty, value); }
}

// Using a DependencyProperty as the backing store for MyProperty. This enables animation, styling, binding, etc...
 public static readonly DependencyProperty MyPropertyProperty =
  DependencyProperty.Register("MyProperty", typeof(int), typeof(ownerclass), new PropertyMetadata(0));

These snippets come standard in Visual Studio and let you be significantly more efficient at creating this repetitive code.  However, for many developers, frustration arises when their repetitive code doesn't fit an out-of-the-box snippet.  Many will resort either to typing all of the code by hand or to copy/pasting large portions of code and then modifying each instance.  Not only are both of these options monotonous and boring, they are also significantly more error-prone than using custom code snippets.

To illustrate this, let's pretend I have a class and I want to add a number of properties to it.  In most cases, the simple "propfull" snippet would suffice, but let's pretend I want all of the properties in this class to be thread-safe.  What I really want my properties to look like is the following:

private object _myPropertyLock = new object();
private string _myProperty;
public string MyProperty
{
 get
 {
  lock (_myPropertyLock)
  {
   return _myProperty;
  }
 }
 set
 {
  lock (_myPropertyLock)
  {
   _myProperty = value;
  }
 }
}

You can see how this template is very similar to what you get with the propfull snippet, but with the addition of a lock object and lock code.  It would be a nightmare to create all of your properties with the propfull snippet and then go back and modify each of them individually.  In this case, you're going to want to create your own snippet.  Let's start by opening the Code Snippets Manager by navigating to Tools > Code Snippets Manager in Visual Studio.  In the Language box, select C# and navigate to the Visual C# folder below:

Code Snippets Manager

Navigate to the location shown and grab a copy of the propfull snippet file.  Next, in the Code Snippets Manager window, select the "My Code Snippets" folder and navigate to that location in Windows Explorer.  Paste your copy of the propfull file into "My Code Snippets" and rename it to proplock.  The name you give this file should match the shortcut you want to type in Visual Studio to trigger the snippet.  Open the file and you will find the following:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
 <CodeSnippet Format="1.0.0">
  <Header>
   <Title>propfull</Title>
   <Shortcut>propfull</Shortcut>
   <Description>Code snippet for property and backing field</Description>
   <Author>Microsoft Corporation</Author>
   <SnippetTypes>
    <SnippetType>Expansion</SnippetType>
   </SnippetTypes>
  </Header>
  <Snippet>
   <Declarations>
    <Literal>
     <ID>type</ID>
     <ToolTip>Property type</ToolTip>
     <Default>int</Default>
    </Literal>
    <Literal>
     <ID>property</ID>
     <ToolTip>Property name</ToolTip>
     <Default>MyProperty</Default>
    </Literal>
    <Literal>
     <ID>field</ID>
     <ToolTip>The variable backing this property</ToolTip>
     <Default>myVar</Default>
    </Literal>
   </Declarations>
   <Code Language="csharp"><![CDATA[private $type$ $field$;

public $type$ $property$
 {
 get { return $field$;}
 set { $field$ = value;}
 }
 $end$]]>
   </Code>
  </Snippet>
 </CodeSnippet>
</CodeSnippets>

Let's start by breaking down each of these sections and how they are used:

Header

The Header section is fairly self-explanatory.  The Title and Shortcut are generally the same value; the Shortcut is the key used in Visual Studio to trigger the template.  In our case, we'll change both of these to "proplock", so that in Visual Studio we'll type proplock [TAB] [TAB] to trigger our snippet.  The less obvious parameter in the Header section is SnippetTypes.  There are three possible values:

  • SurroundsWith: allows the code snippet to be placed around a selected piece of code.
  • Expansion: allows the code snippet to be inserted at the cursor.
  • Refactoring: specifies that the code snippet is used during Visual C# refactoring. Refactoring cannot be used in custom code snippets.

(Source: http://msdn.microsoft.com/en-us/library/ms171442.aspx)

Snippet

The Snippet section is where you define the snippet code and literals.

Declarations

The Declarations section contains the definitions for each of the string literals used in the code definition.  Essentially, you define variables that you fill in when you instantiate the snippet, and the snippet propagates each value wherever the code template dictates.  In the case of propfull, it defines three literals: type, property, and field.  In our case, we'll reuse the three already defined.

Code

The Code section contains the definition of the snippet itself.  A couple of things are noteworthy here.  You'll notice that "csharp" is the language defined.  Additionally, you'll see the literals from the Declarations section used inside the snippet code, surrounded by the '$' symbol.  When you instantiate the snippet in Visual Studio, you provide a value for each literal, and that value is placed in your code everywhere the snippet uses that literal.  It's important to recognize that the literals you define are simply strings, so you can place characters before and after them in the Code section.  In the case of "propfull", you could forgo the field literal entirely and simply use the code:

<Code Language="csharp"><![CDATA[

private $type$ _$property$;
public $type$ $property$
{
 get { return _$property$;}
 set { _$property$ = value;}
}
 $end$]]>
</Code>

Note that the ‘_’ character can simply be prepended to our literal. It’s true that this violates normal code convention, but it illustrates the possibilities well.

proplock Snippet

Now, with just a couple simple modifications to the Code section, we can have the following:

<Code Language="csharp"><![CDATA[

private $type$ $field$;
private object $field$Lock = new object();
public $type$ $property$
{
 get
 {
  lock($field$Lock)
  {
   return $field$;
  }
 }
 set
 {
  lock($field$Lock)
  {
   $field$ = value;
  }
 }
}
 $end$]]>
</Code>

Summary

With these couple of minor changes to the code snippet, we can now go back into Visual Studio and create self-locking properties simply by typing the following:

proplock [TAB][TAB] “typeName” [TAB] “fieldName” [TAB] “propertyName” [ENTER]

When you compare this simple sequence with everything involved in creating such a property by hand each time, you can really see the efficiency benefits.  This is a very simple example, but the same concepts can be applied to create a wide range of snippets.
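For example, filling in the hypothetical values decimal for the type, _balance for the field, and Balance for the property, the proplock snippet above expands into:

private decimal _balance;
private object _balanceLock = new object();
public decimal Balance
{
 get
 {
  lock(_balanceLock)
  {
   return _balance;
  }
 }
 set
 {
  lock(_balanceLock)
  {
   _balance = value;
  }
 }
}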



15 Oct 13

Why Hedge Fund Investors Need to Embrace Open Protocol



As a former hedge fund analyst, I’d been dreaming of standardized risk reporting from hedge funds since Pertrac introduced me to their P-Card five years ago. Among several benefits, standardized risk reporting enables hedge fund investors to capture significantly more data, enhance exposure accuracy and aggregation, improve productivity, and save money. Who wouldn’t love those benefits?

When I heard about Open Protocol for Enabling Risk Aggregation (“Open Protocol” or “OPERA”) and its purported industry backing (Albourne, Goldman Sachs, a few pensions, Och-Ziff, Citadel, DE Shaw), I figured my standardized risk reporting dreams were just about to come true. But I was wrong. Hedge fund investors have been glacially slow to adopt Open Protocol; Albourne seems to be the only hedge fund investor that is really pushing this initiative. And that’s unfortunate because there are some truly great benefits to an open source, standardized risk reporting platform. Here are my top three reasons why hedge fund investors need to join Albourne in the push toward standardized risk reporting.

Track thousands of hedge funds instantly

Hedge fund investors seek to dissect an expansive and disparate universe of data. Returns, AUM, and risk reporting are vital data points in hedge fund analysis. Yet the vast majority of hedge fund investors are not able to capture this data for more than a couple hundred funds; it's too burdensome from a resource and cost perspective. Open Protocol solves this problem. It enables hedge fund investors to track these key data points across thousands of funds with little more than the push of a button, a fact most investors seem to be missing. An incredible amount of data is out there to be analyzed and included in investment decisions, and Open Protocol would make that possible for both large and small hedge fund investors.

Reduce risks

Open Protocol offers standardized hedge fund exposure reporting across various investment strategies.  For a hedge fund investor, this makes portfolio-level exposure aggregation more accurate. It eliminates inherent risks in the current exposure systems including the discretionary categorization of exposure by investments and risk teams (e.g. one analyst classifies an investment as high yield debt while another classifies the same investment as distressed debt), misinterpreting numbers from the managers’ reports, mistyping numbers from the managers’ reports, misunderstanding leverage calculations, out-of-date calculation assumptions, etc.

Save time and money

Open Protocol saves time for members of a hedge fund allocator’s investments team, providing a meaningful boost in productivity. Hedge fund investors large and small use teams of analysts to track the exposure of their existing investments. With Open Protocol, analysts would be freed up to spend their time analyzing investments as opposed to spending it entering and reviewing exposure. Along a similar line, there are hedge fund investors that pay a third party vendor hundreds of thousands of dollars to track their exposure so they can focus on investing. With Open Protocol, these investors could save themselves hundreds of thousands of dollars.

Open Protocol isn't perfect. It doesn't provide position-level data. Not everyone will agree with how Open Protocol categorizes certain investments, and one hedge fund could interpret a bucketing rule differently than another. Still, it's absolutely a step in the right direction.

It’s important to highlight that Open Protocol doesn’t belong to Albourne. Hedge funds don’t exclusively send their Open Protocol reports to Albourne. All hedge fund investors have equal, unencumbered access to these reports. Don’t let Albourne be the only hedge fund investor to benefit from Open Protocol. Start pushing your organization to move toward standardized risk reporting.

——————————————————-

Alex Agran is a consultant at Clarity Consulting (www.claritycon.com), a technology solutions firm based in Chicago. He spent six years at Grosvenor Capital Management, one of the world’s largest hedge fund investors, both analyzing hedge funds and building technology solutions. Alex would be happy to discuss how you can enable Open Protocol at your firm. He can be reached at aagran@claritycon.com or 312.863.3473.



10 Sep 13

Round Robin Load Balancing on the Cheap with Fiddler



Scaling out a web app from a single server deploy to a multi-server one is often not as straightforward as we’d like. It can be a real pain point to keep a multi-server environment configured and up to date, and for the developer trying to test their code, deploying to that environment is an extra hoop to jump through that can break a clean cycle of develop-debug. A lot of smart people have spent time figuring out good ways to mitigate this pain, producing strong guidance and acclaimed books. I am going to acknowledge and side-step that topic for now and instead pass along a “cheap” way I figured out to test how your web app behaves with round robin load balancing.

All load balancers support round robin load balancing, which means distributing incoming web requests evenly across the available servers. Load balancers also tend to be tricky to configure and maintain, which can lead to more of the headaches described above. Enter Fiddler, which can solve this problem on the cheap and let a developer build and debug a web application locally with round robin load balancing. Fiddler is an amazing tool for inspecting and modifying HTTP traffic, and I use it all the time for debugging websites and web services. If you do a lot of web development and haven't tried it, do yourself a favor and check it out. Fiddler also has a .NET plugin model, which lets you execute arbitrary .NET code whenever a web request goes by while Fiddler is running.

I was trying to figure out exactly how SignalR behaved when clients were using long polling and their polls were bouncing across servers (round robin load balancing). After fighting with a load balancer for a while, I thought of Fiddler and implemented a simple round robin plugin in C#. For a Fiddler plugin, you implement an interface from Fiddler.exe, IAutoTamper, add an attribute to the AssemblyInfo, and away you go. With the disclaimer that this is clearly not a real load balancer and the code "works on my machine", I thought I'd share how little code I had to write. Here I'll just show some class variables and the AutoTamperRequestBefore override, which executes before a request proxies through Fiddler: https://gist.github.com/phmiller/6518350
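The real thing lives in the gist linked above; the sketch below is only my rough approximation of that idea, not the gist verbatim. The host and destination addresses are made up, only the request-rewriting logic matters, and the remaining interface members are left as empty implementations:

using Fiddler;

public class RoundRobinTamper : IAutoTamper
{
    // The pretend "load balancer" address the browser hits, and the real web app
    // instances to rotate between. All of these values are made up for illustration.
    private const string BalancerHost = "127.0.0.1:9000";
    private readonly string[] destinations = { "127.0.0.1:9001", "127.0.0.1:9002" };
    private int nextIndex;

    public void AutoTamperRequestBefore(Session oSession)
    {
        // Only touch traffic aimed at the pretend load balancer.
        if (oSession.host != BalancerHost) return;

        // Rewrite the target host, then advance so the next request goes to the other instance.
        // (Good enough for local debugging; a real balancer would need to be thread-safe.)
        oSession.host = destinations[nextIndex];
        nextIndex = (nextIndex + 1) % destinations.Length;
    }

    // The rest of IAutoTamper/IFiddlerExtension isn't needed for this trick.
    public void AutoTamperRequestAfter(Session oSession) { }
    public void AutoTamperResponseBefore(Session oSession) { }
    public void AutoTamperResponseAfter(Session oSession) { }
    public void OnBeforeReturningError(Session oSession) { }
    public void OnLoad() { }
    public void OnBeforeUnload() { }
}

Per the plugin model mentioned above, the compiled assembly also needs Fiddler's RequiredVersion attribute in AssemblyInfo.cs and gets dropped into Fiddler's Scripts folder so it loads at startup.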

As you can see from the code, the plugin looks for a particular host:port combination to redirect. This can be whatever you want to make up; in this case, a local IP address and port. When the plugin sees a request to that address, it redirects the request to one of two destination addresses, rotating between them with each new request, i.e. a round robin. These destination addresses should be instances of your web app. Not pretty, but it worked like a charm for me. I hope this saves some of you a few headaches and maybe inspires you to explore Fiddler and its plugin model.

 



7 Aug 13

Thinking Functionally in C# with monads.net



Functional concepts have worked their way into C#, most notably through LINQ and lambda expressions, but the benefits of functional concepts in C# don't have to stop there. This post shows how functional concepts can be used to manage boilerplate code such as if/else statements, null checks, and logging. The examples below use extension methods from the GitHub project monads.net. monads.net isn't required (or even endorsed) for working more functional goodness into your C# code, but it's a great tool for learning and sparking ideas.

Before we dive into the code pit, please read my painless, oversimplified explanation of what monads are:
Monads are utility functions that accept and conditionally pass the result of functionX() to functionY().

Let’s start by reaching into a deeply nested object structure while doing null checks along the way.

C#

		Building building = SomeQueryThatShouldReturnABuilding();
		string phoneNumber = null;

		if(building != null)
		{
			if(building.Manager != null)
			{
				if(building.Manager.ContactInfo != null)
				{
					if(building.Manager.ContactInfo.PhoneNumber != null)
					{
						phoneNumber = building.Manager.ContactInfo.PhoneNumber;
					}
				}
			}
		}
	

C# with monads.net

	string phoneNumber = SomeQueryThatShouldReturnABuilding()
	.With(b=>b.Manager)
	.With(m=>m.ContactInfo)
	.With(c=>c.PhoneNumber);

Explanation:

The With() extension method will only evaluate the expression it is passed if the value it is executing off of is not null, otherwise it returns null. This means that a null result anywhere in the chain will cause the rest of the With() calls to not evaluate their expression, and instead pass the null safely to the end of the chain.

What With() looks like under the covers

public static TResult With<TSource, TResult>(this TSource source, Func<TSource, TResult> action)
	where TSource : class
{
	if (source != default(TSource))
	{
		return action(source);
	}
	else
	{
		return default(TResult);
	}
}


What if we don't care about storing the phone number in a variable? What if instead we just want to dial the number if it's found?


C#

	Building building = SomeQueryThatShouldReturnABuilding();

	if(building != null)
	{
		if(building.Manager != null)
		{
			if(building.Manager.ContactInfo != null)
			{
				if(building.Manager.ContactInfo.PhoneNumber != null)
				{
					Dial(building.Manager.ContactInfo.PhoneNumber)
				}
			}
		}
	}

C# with monads.net

	SomeQueryThatShouldReturnABuilding()
	.With(b=>b.Manager)
	.With(m=>m.ContactInfo)
	.With(c=>c.PhoneNumber)
	.Do((p)=>{ Dial(p);});

Explanation:

The Do method is similar to the With Method. If the current value of the chain is null, the expression will not be evaluated and a null will be returned instead of the result of the expression.

What Do() looks like under the covers

public static TSource Do<TSource>(this TSource source, Action<TSource> action)
	where TSource : class
{
	if (source != default(TSource))
	{
		action(source);
	}
	return source;
}

And what if we only want to dial the phone number if the manager is at work?

C#

	Building building = SomeQueryThatShouldReturnABuilding();

	if(building != null)
	{
		if(building.Manager != null && building.Manager.isAtWork)
		{
			if(building.Manager.ContactInfo != null)
			{
				if(building.Manager.ContactInfo.PhoneNumber != null)
				{
					Dial(building.Manager.ContactInfo.PhoneNumber)
				}
			}
		}
	}

C# with monads.net

	SomeQueryThatShouldReturnABuilding()
	.With(b=>b.Manager)
	.If(m=>m.isAtWork)
	    .With(m=>m.ContactInfo)
	    .With(c=>c.PhoneNumber)
	    .Do((p)=>{ Dial(p);});

Explanation:

The If method will return a null if the current value is null or the boolean expression it is passed evaluates to false. This will cause the rest of the chain to be aborted.

What If() looks like under the covers

public static TSource If<TSource>(this TSource source, Func<TSource, bool> condition)
	where TSource : class
{
	if ((source != default(TSource)) && (condition(source) == true))
	{
		return source;
	}
	else
	{
		return default(TSource);
	}
}

Cool. If we find a manager, and the manager is at work, and we have the manager's telephone number, we dial it. But what if we want to log to a server or file along the way? An elegant way to handle this is to create our own logging "monad": another chainable extension method that can be used anywhere in our logic chain.

Our Log() looks like this. It logs the current value and a message if the current value isn't null, and it always returns the value passed into it, even when that value is null. This allows the chaining to continue, similar to With(), Do(), and If().

	public static T Log<T>(this T currentValue, string message)
		where T : class
	{
		if (currentValue != default(T))
			Console.WriteLine(currentValue + message);

		return currentValue;
	}

C#

	Building building = SomeQueryThatShouldReturnABuilding();

	if(building != null)
	{
		if(building.Manager != null)
		{
			Console.WriteLine("Found Manager");
			if(building.Manager.isAtWork)
			{
				Console.WriteLine("Manager is at work");
				if(building.Manager.ContactInfo != null)
				{
					if(building.Manager.ContactInfo.PhoneNumber != null)
					{
						Console.WriteLine("Dialing The Manager");
						Dial(building.Manager.ContactInfo.PhoneNumber)
					}
				}
			}
		}
	}

C# with monads.net

	  SomeQueryThatShouldReturnABuilding()
	 .With(b=>b.Manager)
	 .Log("Found Manager")
	 .If(m=>m.isAtWork)
	     .Log("Manager is at work")
	     .With(m=>m.ContactInfo)
	     .With(c=>c.PhoneNumber)
	     .Log("Dialing The Manager")
	     .Do((p)=>{ Dial(p);});

Explanation:

Look at that, we made a logging monad that behaves the same way as .With(), .Do(), and .If().

Conclusion

Your day-to-day work life may never involve functional languages like Haskell or F#, but the mind-opening concepts they tout can lead to wonderful coding patterns, and that knowledge is useful in any language.

If you want to learn more you should check out this excellent post over at adit.io



23 Jul 13

Clarity Clue Tech Challenge



Clarity Clue

Every once in a while, Clarity holds a Tech Challenge in which employees code entries that compete against each other in games such as Risk, Hearts, or Battleship. Recently, Brad Pederson and I hosted a Tech Challenge where we created an engine in WPF for the game Clue. Fifteen other Clarity employees created their own entries, derived from a base class we set up, to participate in a tournament.

Differences from Normal Clue

We changed some of the basic rules in order to make the game more difficult. This also meant that people couldn't simply copy an existing solution from elsewhere and submit it as their own work, since there were fabulous prizes on the line. The main changes to the game were as follows:

Motives
In addition to suspects, weapons, and rooms, motives were added as part of the solution. This increased the overall number of answer possibilities, making it much more difficult to randomly guess the answer. There were six motives as to why a suspect may have committed the murder:

  • Accident
  • Inheritance
  • Jealousy
  • Passion
  • Self Defense
  • Serial Killer

Community Cards
Once three cards were dealt to each player, the remaining five cards were kept in the middle as Community Cards. If a suggestion/accusation could not be disproven by anyone, the Community Cards were checked. If one of the Community Cards could disprove the suggestion/accusation, it was flipped over and all players were notified which card it was.

Turn Limit
The game was limited to 40 turns in case we ran into a situation where no one was going to win (players eliminated part of the correct answer, got stuck moving, etc.).

Scoring

Every Player class kept track of a Best Guess – their best guess at the Person, Weapon, Room, and Motive that was in the solution – that was used to get points at the end of the game. Scoring was based on how many of the parts of the solution (Person, Weapon, Room, Motive) the player had correct:

  • Game Winner – 10 points
  • 4/4 correct but not a winner – 7 points
  • 3/4 correct – 5 points
  • 2/4 correct – 3 points
  • 1/4 correct – 1 point
  • 0/4 or failed accusation – 0 points

Clarity Clue

The Code

Every entry had to implement a base Player class that Brad and I defined. Here is the class definition (a more detailed breakdown can be found in the source code and documentation):

        // Which person you are (Miss Scarlet, Colonel Mustard, etc.)
        public PersonType PersonType { get; set; }
        // The cards you start out with
        public List<Card> Cards { get; set; }
        // Your team name - displayed in the view
        public string TeamName { get; set; }

        // These lists are tied to the Player display
        // When items are eliminated from these lists, they get an 'X' in the view
        public List<PersonType> PossiblePeople;
        public List<WeaponType> PossibleWeapons;
        public List<RoomType> PossibleRooms;
        public List<MotiveType> PossibleMotives;

        // This is your best guess for the answer.
        public Answer BestGuess { get; set; }

        // Method called as soon as all players are initialized. Do any setup logic in here that you deem necessary.
        public abstract void Initialize(PersonType personType, List<Card> cards, List<PersonType> playerOrder);

        // Method called every time it is your turn
        // You are expected to respond with a moves list and a guess at the correct answer
        public abstract TurnResult ExecuteTurn(TurnState state, int dieRoll);

        // Method called when your Suggestion/Accusation has been disproved by someone else showing you a card
        public abstract void MyAccusationDisproved(PersonType whoDisproved, Card card, List<PersonType> whoDidNotDisprove);

        // Method called when someone else's suggestion/accusation has been disproved
        public abstract void AccusationDisproved(PersonType whoAccused, PersonType whoDisproved, Answer accusation, List<PersonType> whoDidNotDisprove, bool isSuggestion);

        // The game engine has identified you as being able to show a card to prove the suggestion/accusation wrong.
        public abstract Card ChooseWhichCardToReveal(List<Card> possibleCardsToReveal, PersonType whoAccused, Answer accusation, List<PersonType> whoDidNotDisprove);

        // There are 5 community cards that no one owns. If no player can disprove the accusation, the community cards are checked. Once one is revealed, all players are notified.
        public abstract void CommunityCardRevealed(PersonType whoAccused, Answer accusation, Card card);

Results

In Round 1, every entry got to play two matches of 25 games a piece. The scores from the 50 games each entry played were added up and the top six entries moved onto the finals. The finals consisted of one batch of 100 games with the results as follows:

  1. EDemondslayer (Drew Randall and Gary Gilmer)
  2. Holmes (Nathan Gonzalez)
  3. Moron (Evan DeMond)
  4. SaucyWombat (Ben Zeinz)
  5. Clueless in Chicago (Shannon Geraci)
  6. Sean Cleary (Sean Cleary)

An honorary mention goes out to Lee Roth for tying for 6th place in Round 1 but missing out on the finals due to the arbitrary tiebreaker of who submitted their code first.

Source

The source code contains all 15 entries, a sample player, the game engine, and documentation. Running the application as-is will run the final six winning entries. The speed of the game and the batch size can be adjusted through the UI to suit.

Clarity.Clue.zip



26 Mar 13

Design for Software the book – available now!



Today is a big day: I'm pleased to announce that "Design for Software: A Playbook for Developers" is finally available to the public! It was a long journey, taking nearly 14 months to complete, but it's here and I couldn't be more excited. Check it out at designforsoftware.com

A little background

Over the past ten years I've worked closely with designers and developers in the emerging technology industry. During that time I've realized that designing software isn't like any other type of design. Designing software is unique. It combines many fields of design into a single medium: motion, typography, color theory, interaction design, information architecture, ethnography, and more. And for many reasons it deserves its own design process.

That is the premise for this book: a process for designing software. My goal is to make this unique discipline more accessible for those who don't have expertise in UI/UX design. Throughout the book I attempt to break down design theory concepts and present them in a way that makes sense for application design.

This book is written for non-designers and tech-savvy artists looking to create best-in-class software. I believe anyone can design great software, regardless of background. It doesn't matter whether you have a design degree (although it might help) or a history as a hard-core developer. Many design principles can be broken down into frameworks that help take the guesswork out of an otherwise abstract field.

By the end of the book you’ll have effectively learned all my design “secrets”. I hope that the design process I’ve written about is as useful to you as it has been for me.

Book Extras

It's pretty difficult to write about interactive design in a static medium. Go figure. During the writing process there were many times when static images didn't do the content justice, especially when trying to describe motion. So I created a bunch of complementary web content.

You don’t have to buy the book to reap the benefits of the extras, but you should. They will make more sense…and it will make my dog happy (not my actual dog).

Where can I buy it?

The publisher (Wiley) has done an amazing job making the book available from a variety of retailers and in a ton of formats. Of course I would encourage you to pick up the physical book, but the digital versions are good too. As of now, you can purchase the book from the following retailers:

I’m looking forward to seeing it out in the wild!
Best,
Erik
