Author: Sabarinathan A

About Sabarinathan A

I, Sabarinathan Arthanari, have over 20 years of experience in computer science and specialise in Microsoft architecture for enterprise applications.

Distributed – SOA – AKKA




Akka is an event-driven messaging system for asynchronous logic flow within a distributed application, i.e. it is logic-oriented.

Kafka is a log-streaming system for asynchronous data flow within a distributed application, i.e. it is data-oriented.


Testing basics, and the right way to test for agile teams

Types of testing

  • Unit testing – focuses on one unit of code, usually a class or a method.
  • Integration testing – exercises the program together with an external resource to confirm that the code functions properly (e.g. database calls or loading a file). If you are making database calls in your unit tests, they aren't unit tests; they are integration tests.
  • Regression testing – finds defects after a major code change has occurred. Old unit tests may need to be refactored to match the new code. Of course, after a major change to your code base, you can almost guarantee that at least 25–50% of your unit test code will fail.
  • Load testing – primarily concerned with how well the system runs under a specific load of users or large amounts of data.

Here are three of the most common types of automated tests:

  1. Unit tests: a single piece of code (usually an object or a function) is tested in isolation from other pieces.
  2. Integration tests: multiple pieces are tested together, for example testing database access code against a test database.
  3. Acceptance tests (also called functional tests): the entire application is tested automatically, for example using a tool like Selenium to drive a browser.
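To make the unit-test level concrete, here is a minimal framework-free sketch. PriceCalculator and its discount rule are hypothetical examples invented for illustration; a real project would typically use a framework such as xUnit or NUnit.

```csharp
using System;

// Hypothetical unit under test: a pure method with no external dependencies,
// so it can be tested in isolation (no database, no files).
static class PriceCalculator
{
    public static decimal ApplyDiscount (decimal price, decimal percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException (nameof (percent));
        return price - price * percent / 100m;
    }
}

static class PriceCalculatorTests
{
    static void Main()
    {
        // Each check exercises exactly one unit of code.
        Assert (PriceCalculator.ApplyDiscount (200m, 10m) == 180m, "10% off 200");
        Assert (PriceCalculator.ApplyDiscount (100m, 0m) == 100m, "0% discount");
        Console.WriteLine ("All tests passed");
    }

    static void Assert (bool condition, string name)
    {
        if (!condition) throw new Exception ("Test failed: " + name);
    }
}
```

If database or file access creeps into such a test, it has become an integration test by the definition above.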


  • New programmers often don't understand testing; they don't think it is necessary, and many developers have little idea of how testing is actually done.
  • Testing, at its core, is really about reducing risk.
  • The goal of testing software is not to find bugs or to make software better. It's to reduce risk by proactively finding and helping to eliminate problems that would most greatly impact the customer using the software.
  • You can never find all the bugs or defects in a piece of software, and you can never test every possible input into the software (for any non-trivial application).
  • So the idea is not to find every single possible thing that is wrong, or even to verify the software against a spec—as some people like to define software testing—because both are impossible.
  • Impact can come from the frequency of an error or undesired functionality, or from the severity of the problem.
  • Typically this is achieved by first prioritizing which areas of the software are likely to have the biggest impact (i.e. risk), and then deciding on a set of tests to run that verify the desired functionality.
  • When the actual functionality deviates from the desired functionality, a defect is usually logged, and defects are prioritized based on severity.

Black box testing

Black box testing is simply testing as if the software itself were a black box.

When you do black box testing, you are only concerned with inputs and outputs. You don’t care how the actual outputs are derived. You don’t know anything about the code or how it works.

White box testing

With white box testing, you have at least some idea of what is going on inside the software. (Unit testing is often lumped in here, but it is really part of the development process rather than testing proper.)

Real white box testing is when you understand some of the internals of the system—and perhaps have access to the actual source code—and use that knowledge to inform your testing and what you target.


Testing the right way

WHAT do developers test? In my experience, it's all about avoiding bugs: validation of input data, a successful registration process, whether all links work, etc. In other words, if it doesn't cause a bug or exception, we're good to go.

However, that’s not enough. It’s too narrow. You need to take things a step further into the business world.

In addition to making something work, developers also need to test whether that function is actually being used. For that, you usually need to set up some kind of analytics or event tracking system to monitor the usage after the launch. Be prepared to change the functionality or even remove it. After all, that extra code is pointless if your client doesn’t employ it, so it’s wise not to get attached.

Also, while testing before the launch, look for logical errors.

  • What doesn’t make sense or is hard to use?
  • What could potentially cause users to not use the function?
  • Too many fields to fill?
  • Menu navigation that's too clunky?
  • Some buttons missing?

Of course, to be able to think in that direction, you need to do extra homework on the business context. Dig a little deeper into the product and its goals, have a few conversations asking the client "why," and possibly even read some articles or books on the subject.

Quite often, it seems that developers see business people as their enemies. Project managers, clients, marketing department, sales people—even if they all work in the same company—there’s a silent war going on.

Usually you can feel it by hearing phrases like:

  • “…they’re stupid and constantly changing demands…”
  • "…meeting again? They don't let me do my job!…"
  • “…they think it’s a five-minute change, they have no idea…”
  • “…I don’t know how long it will take. It’s done as soon as it’s done…”

In reality, the development team and the marketing team should act as one team with a common goal. They should help each other by discussing situations instead of blaming each other.

In my experience, if a project grows to the point of dealing with optimization issues, by then the client will have many customers, their feedback, analytics numbers, and usually a new vision for the next stage of growth. Quite often, that means re-creating the project from scratch—not because of those performance issues, but for business reasons: different functionality, a pivoted pricing model, a new design, etc.

Implement important design patterns in a single solution


Recently I was given an interesting problem: "A cube list to be implemented with a doubly linked list."

I began by solving the problem with a straightforward algorithm, then refactored it using OOP principles and design patterns.

The solution is implemented in C# on .NET Core 2.0. You can find the source code on Google Drive.

Phase 1: Apply the most important design patterns

The most important design principles and patterns I applied in the first phase of refactoring are listed below.

  1. The Dependency Inversion principle is implemented with the necessary interfaces.
  2. Factory pattern.
  3. The Service Locator pattern is implemented using the built-in .NET Core dependency injection framework.
  4. The Iterator pattern is implemented using IEnumerable and yield.

Basic Idea

  1. The cube is implemented with both a singly linked list and a doubly linked list.
  2. Both list classes implement a common interface, ICubeNode. The cube class is not aware of (doesn't care about) what type of list is used. (Dependency Inversion principle)
  3. The type of list used to build the cube can be selected by the user at run time.
  4. The selection is passed as a parameter to the factory class. (dependency injection)
  5. The factory generates the required node objects. (Service Locator pattern)
  6. Each and every element of the cube can be extracted using the Iterator pattern.
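The idea above can be sketched as follows. All names here (ICubeNode, NodeFactory, Cube) are illustrative reconstructions, not the actual Google Drive source, and the DI-container wiring is replaced by a plain static factory for brevity.

```csharp
using System;
using System.Collections.Generic;

public enum ListKind { Singly, Doubly }

// Common interface: the cube depends only on this abstraction,
// never on a concrete list type (Dependency Inversion).
public interface ICubeNode
{
    int Value { get; set; }
    ICubeNode Next { get; set; }
}

public class SinglyNode : ICubeNode
{
    public int Value { get; set; }
    public ICubeNode Next { get; set; }
}

public class DoublyNode : ICubeNode
{
    public int Value { get; set; }
    public ICubeNode Next { get; set; }
    public ICubeNode Prev { get; set; }   // extra link; the cube never needs to know
}

// Factory pattern: the user's run-time selection decides which node type is built.
public static class NodeFactory
{
    public static ICubeNode Create (ListKind kind) =>
        kind == ListKind.Singly ? new SinglyNode() : (ICubeNode) new DoublyNode();
}

public class Cube
{
    private ICubeNode _head;
    private readonly ListKind _kind;

    public Cube (ListKind kind) { _kind = kind; }   // selection injected as a parameter

    public void Add (int value)
    {
        var node = NodeFactory.Create (_kind);
        node.Value = value;
        node.Next = _head;    // prepend to the list
        _head = node;
        // For DoublyNode we could also wire Prev here; omitted for brevity.
    }

    // Iterator pattern: every element is exposed through IEnumerable + yield.
    public IEnumerable<int> Elements()
    {
        for (var n = _head; n != null; n = n.Next)
            yield return n.Value;
    }
}

public static class Demo
{
    public static void Main()
    {
        var cube = new Cube (ListKind.Doubly);
        cube.Add (1); cube.Add (2); cube.Add (3);
        Console.WriteLine (string.Join (",", cube.Elements()));   // 3,2,1
    }
}
```

Because Cube only ever sees ICubeNode, swapping the list implementation is a one-line change at the call site.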


Note: Thanks to Armen Shimoon for sharing the idea to invoke the factory pattern within the .NET Core dependency injection container.

Asynchronous programming & Synchronization in Multi-Threading

What is the difference between a process and a thread?

– An executing instance of a program is called a process.
– Some operating systems use the term 'task' to refer to a program that is being executed.
– A process is always stored in main memory, also termed primary memory or random access memory.
– A process is therefore termed an active entity: it disappears if the machine is rebooted.
– Several processes may be associated with the same program.
– On a multiprocessor system, multiple processes can be executed in parallel.
– On a uniprocessor system, though true parallelism is not achieved, a process scheduling algorithm is applied, and the processor executes each process one at a time, yielding an illusion of concurrency.

– A thread is a subset of a process.
– It is termed a 'lightweight process', since it is similar to a real process but executes within the context of a process and shares the resources allotted to that process by the kernel.
– Usually, a process has only one thread of control – one set of machine instructions executing at a time.
– A process may also be made up of multiple threads of execution that execute instructions concurrently.
– Multiple threads of control can exploit the true parallelism possible on multiprocessor systems.
– On a uniprocessor system, a thread scheduling algorithm is applied and the processor is scheduled to run each thread one at a time.
– All the threads running within a process share the same address space, file descriptors, stack and other process-related attributes.
– Since the threads of a process share the same memory, synchronizing access to the shared data within the process becomes critically important.

– Source: Knowledge Quest

In synchronous mode, every task executes in sequence, so it's easier to program. That's the way we've been doing it for years.

With asynchronous execution, you have a few challenges:
– You must synchronize tasks. For example, suppose you run a task that must be executed after three others have finished; you will have to create a mechanism to wait for all of them to finish before launching the new task.
– You must address concurrency issues. If you have a shared resource, like a list that is written in one task and read in another, make sure that it's kept in a known state.
– There is no logical sequence anymore. The tasks can end at any time, and you don't have control over which one finishes first.

Synchronous programming, on the other hand, has these disadvantages:
– It takes longer to finish.
– It may block the user interface (UI) thread. Typically, these programs have only one UI thread, and when you tie it up in a blocking operation, you get the spinning wheel (and "not responding" in the caption title) in your program—not the best experience for your users.
– It doesn't use the multicore architecture of newer processors. Regardless of whether your program is running on a 1-core or a 64-core processor, it will run as quickly (or slowly) on both.

Asynchronous programming eliminates these disadvantages: it won't hang the UI thread (because the work can run as a background task), and it can use all the cores in your machine, making better use of machine resources. So, do you choose easier programming or better use of resources? Fortunately, you don't have to make this decision. Microsoft has created several ways to minimize the difficulties of programming for asynchronous execution.

Should a return statement be inside or outside a lock statement?

It is recommended to put the return inside the lock. Otherwise you risk another thread entering the lock and modifying your variable before the return statement, therefore making the original caller receive a different value than expected.
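A runnable sketch of this point (the Counter class is illustrative, not from the original text): returning inside the lock guarantees the caller receives exactly the value it just computed.

```csharp
using System;
using System.Threading;

class Counter
{
    readonly object _locker = new object();
    int _count;

    public int Increment()
    {
        lock (_locker)
        {
            _count++;
            return _count;   // safe: no other thread can change _count between
                             // the increment and the return
        }
        // If the return were moved outside the lock, another thread could
        // increment _count in between, and the caller would receive a value
        // it did not produce.
    }

    public int Count { get { lock (_locker) return _count; } }

    static void Main()
    {
        var counter = new Counter();
        var threads = new Thread[4];
        for (int i = 0; i < 4; i++)
            threads[i] = new Thread (() => { for (int j = 0; j < 1000; j++) counter.Increment(); });
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();
        Console.WriteLine (counter.Count);   // 4000
    }
}
```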

What is the use of volatile keyword?

The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. Fields that are declared volatile are not subject to compiler optimizations that assume access by a single thread. This ensures that the most up-to-date value is present in the field at all times.
The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access.
The volatile keyword can be applied to fields of these types:
– Reference types.
– Pointer types (in an unsafe context). Note that although the pointer itself can be volatile, the object that it points to cannot. In other words, you cannot declare a “pointer to volatile.”
– Simple types such as sbyte, byte, short, ushort, int, uint, char, float, and bool.
– An enum type with an integral base type.
– Generic type parameters known to be reference types.
– IntPtr and UIntPtr.

Task-based Asynchronous Pattern (TAP)

Reasons for a quick return include the following:

  • Asynchronous methods may be invoked from user interface (UI) threads, and any long-running synchronous work could harm the responsiveness of the application.
  • Multiple asynchronous methods may be launched concurrently. Therefore, any long-running work in the synchronous portion of an asynchronous method could delay the initiation of other asynchronous operations, thereby decreasing the benefits of concurrency.

Will your code be "waiting" for something, such as data from a database or a network? If your answer is "yes", then your work is I/O-bound.

Will your code be performing a very expensive computation? If you answered “yes”, then your work is CPU-bound.

  • For I/O-bound code, you await an operation which returns a Task or Task<T> inside of an async method.
  • For CPU-bound code, you await an operation which is started on a background thread with the Task.Run method.

I/O-Bound Example: Downloading data from a web service

private readonly HttpClient _httpClient = new HttpClient();

downloadButton.Clicked += async (o, e) =>
{
    // This line will yield control to the UI as the request
    // from the web service is happening.
    // The UI thread is now free to perform other work.
    var stringData = await _httpClient.GetStringAsync(URL);
    DoSomethingWithData(stringData);
};



CPU-bound Example: Performing a Calculation for a Game

private DamageResult CalculateDamageDone()
{
    // Code omitted:
    // Does an expensive calculation and returns
    // the result of that calculation.
}

calculateButton.Clicked += async (o, e) =>
{
    // This line will yield control to the UI while CalculateDamageDone()
    // performs its work. The UI thread is free to perform other work.
    var damageResult = await Task.Run(() => CalculateDamageDone());
    DisplayDamage(damageResult);
};



  • async methods need to have an await keyword in their body or they will never yield!
  • You should add “Async” as the suffix of every async method name you write.


  • async void should only be used for event handlers.
  • Exceptions thrown in an async void method can’t be caught outside of that method.
  • async void methods are very difficult to test.
  • async void methods can cause bad side effects if the caller isn’t expecting them to be async.


  • Tread carefully when using async lambdas in LINQ expressions.
  • Write code that awaits Tasks in a non-blocking manner. The Task API contains two methods, Task.WhenAll and Task.WhenAny, which allow you to write asynchronous code that performs a non-blocking wait on multiple background jobs.

Use this…             Instead of this…            When wishing to do this
await                 Task.Wait or Task.Result    Retrieving the result of a background task
await Task.WhenAny    Task.WaitAny                Waiting for any task to complete
await Task.WhenAll    Task.WaitAll                Waiting for all tasks to complete
await Task.Delay      Thread.Sleep                Waiting for a period of time
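The table above can be sketched in running code. SimulatedJobAsync is a hypothetical stand-in for real background work:

```csharp
using System;
using System.Threading.Tasks;

class WhenAllDemo
{
    public static async Task<int> SimulatedJobAsync (int id)
    {
        await Task.Delay (100 * id);   // non-blocking wait (vs Thread.Sleep)
        return id * 10;
    }

    static async Task Main()
    {
        // await Task.WhenAll releases the current thread while all jobs run;
        // Task.WaitAll would block it instead.
        int[] results = await Task.WhenAll (
            SimulatedJobAsync (1), SimulatedJobAsync (2), SimulatedJobAsync (3));
        Console.WriteLine (string.Join (",", results));   // 10,20,30

        // await Task.WhenAny resumes as soon as the first job finishes.
        Task<int> first = await Task.WhenAny (SimulatedJobAsync (1), SimulatedJobAsync (5));
        Console.WriteLine (first.Result);                 // 10 (the shorter job)
    }
}
```

Note that Task.WhenAll preserves the order of its arguments in the results array, regardless of completion order.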
  • Write less stateful code

Don’t depend on the state of global objects or the execution of certain methods. Instead, depend only on the return values of methods. Why?

  • Code will be easier to reason about.
  • Code will be easier to test.
  • Mixing async and synchronous code is far simpler.
  • Race conditions can typically be avoided altogether.
  • Depending on return values makes coordinating async code simple.
  • (Bonus) it works really well with dependency injection.

Immutable Objects

An immutable object is one whose state cannot be altered — externally or internally. The fields in an immutable object are typically declared read-only and are fully initialized during construction.

Immutability is a hallmark of functional programming — where instead of mutating an object, you create a new object with different properties. LINQ follows this paradigm. Immutability is also valuable in multithreading in that it avoids the problem of shared writable state — by eliminating (or minimizing) the writable.

One pattern is to use immutable objects to encapsulate a group of related fields, to minimize lock durations.

class ProgressStatus    // Represents progress of some activity
{
    public readonly int PercentComplete;
    public readonly string StatusMessage;

    // This class might have many more fields…

    public ProgressStatus (int percentComplete, string statusMessage)
    {
        PercentComplete = percentComplete;
        StatusMessage = statusMessage;
    }
}

Nonblocking Synchronization



Writing nonblocking or lock-free multithreaded code properly is tricky! Memory barriers, in particular, are easy to get wrong (the volatile keyword is even easier to get wrong). Think carefully whether you really need the performance benefits before dismissing ordinary locks.

Memory Barriers and Volatility

Consider the following example:

class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        _complete = true;
    }

    void B()
    {
        if (_complete) Console.WriteLine (_answer);
    }
}


If methods A and B ran concurrently on different threads, might it be possible for B to write “0”? The answer is yes — for the following reasons:

  • The compiler, CLR, or CPU may reorder your program’s instructions to improve efficiency.
  • The compiler, CLR, or CPU may introduce caching optimizations such that assignments to variables won’t be visible to other threads right away.

C# and the runtime are very careful to ensure that such optimizations don’t break ordinary single-threaded code — or multithreaded code that makes proper use of locks. Outside of these scenarios, you must explicitly defeat these optimizations by creating memory barriers (also called memory fences) to limit the effects of instruction reordering and read/write caching.

static void Main()
{
    bool complete = false;
    var t = new Thread (() =>
    {
        bool toggle = false;
        while (!complete) toggle = !toggle;
    });
    t.Start();
    Thread.Sleep (1000);
    complete = true;
    t.Join();        // Blocks indefinitely
}

This program never terminates because the complete variable is cached in a CPU register. Inserting a call to Thread.MemoryBarrier inside the while loop (or locking around reading complete) fixes the error.
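The fix described above, restated as a self-contained program (restructured into a Run method purely so the behaviour can be checked): a memory barrier inside the loop forces a fresh read of the flag on each iteration.

```csharp
using System;
using System.Threading;

class BarrierFix
{
    static bool _complete;

    public static bool Run (int milliseconds)
    {
        _complete = false;
        var t = new Thread (() =>
        {
            bool toggle = false;
            while (!_complete)
            {
                toggle = !toggle;
                Thread.MemoryBarrier();   // defeats the register-caching optimization
            }
        });
        t.Start();
        Thread.Sleep (milliseconds);
        _complete = true;
        t.Join();                         // now returns promptly instead of blocking forever
        return true;
    }

    static void Main() => Console.WriteLine (Run (1000));   // True
}
```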



The volatile keyword

Another (more advanced) way to solve this problem is to apply the volatile keyword to the _complete field:

volatile bool _complete;

The volatile keyword instructs the compiler to generate an acquire-fence on every read from that field, and a release-fence on every write to that field. An acquire-fence prevents other reads/writes from being moved before the fence; a release-fence prevents other reads/writes from being moved after the fence. These “half-fences” are faster than full fences because they give the runtime and hardware more scope for optimization.

As it happens, Intel’s X86 and X64 processors always apply acquire-fences to reads and release-fences to writes — whether or not you use the volatile keyword — so this keyword has no effect on the hardware if you’re using these processors. However, volatile does have an effect on optimizations performed by the compiler and the CLR — as well as on 64-bit AMD and (to a greater extent) Itanium processors. This means that you can’t relax your use of volatile just because you know your clients are running a particular type of CPU.

The effect of applying volatile to fields can be summarized as follows:

First instruction    Second instruction    Can they be swapped?
Read                 Read                  No
Read                 Write                 No
Write                Write                 No (the CLR ensures that write-write operations are never swapped, even without the volatile keyword)
Write                Read                  Yes!

Notice that applying volatile doesn’t prevent a write followed by a read from being swapped, and this can create brainteasers.

This presents a strong case for avoiding volatile.



Use of memory barriers is not always enough when reading or writing fields in lock-free code. Operations on 64-bit fields, increments, and decrements require the heavier approach of using the Interlocked helper class. Interlocked also provides the Exchange and CompareExchange methods, the latter enabling lock-free read-modify-write operations, with a little additional coding.

A statement is intrinsically atomic if it executes as a single indivisible instruction on the underlying processor. Strict atomicity precludes any possibility of preemption. A simple read or write on a field of 32 bits or less is always atomic. Operations on 64-bit fields are guaranteed to be atomic only in a 64-bit runtime environment, and statements that combine more than one read/write operation are never atomic.

class Atomicity
{
    static int _x, _y;
    static long _z;

    static void Test()
    {
        long myLocal;
        _x = 3;             // Atomic
        _z = 3;             // Nonatomic on 32-bit environs (_z is 64 bits)
        myLocal = _z;       // Nonatomic on 32-bit environs (_z is 64 bits)
        _y += _x;           // Nonatomic (read AND write operation)
        _x++;               // Nonatomic (read AND write operation)
    }
}


class Program
{
    static long _sum;

    static void Main()
    {                                                                // _sum
        // Simple increment/decrement operations:
        Interlocked.Increment (ref _sum);                            // 1
        Interlocked.Decrement (ref _sum);                            // 0

        // Add/subtract a value:
        Interlocked.Add (ref _sum, 3);                               // 3

        // Read a 64-bit field:
        Console.WriteLine (Interlocked.Read (ref _sum));             // 3

        // Write a 64-bit field while reading previous value:
        // (This prints "3" while updating _sum to 10)
        Console.WriteLine (Interlocked.Exchange (ref _sum, 10));     // 10

        // Update a field only if it matches a certain value (10):
        Console.WriteLine (Interlocked.CompareExchange (ref _sum,
                                                        123, 10));   // 123
    }
}



Interlocked’s mathematical operations are restricted to Increment, Decrement, and Add. If you want to multiply — or perform any other calculation — you can do so in lock-free style by using the CompareExchange method (typically in conjunction with spin-waiting).
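A sketch of that lock-free style (the class and method names are illustrative): the multiply is retried with CompareExchange until no other thread has intervened between our read and our write — a spin-wait.

```csharp
using System;
using System.Threading;

class LockFreeMultiply
{
    static long _total = 5;

    // Lock-free read-modify-write: snapshot the field, compute the new value,
    // then commit only if the field still holds the snapshot. Otherwise retry.
    public static long MultiplyBy (ref long field, long factor)
    {
        long snapshot, newValue;
        do
        {
            snapshot = Interlocked.Read (ref field);   // atomic 64-bit read
            newValue = snapshot * factor;
        }
        while (Interlocked.CompareExchange (ref field, newValue, snapshot) != snapshot);
        return newValue;
    }

    static void Main()
    {
        Console.WriteLine (MultiplyBy (ref _total, 3));   // 15
    }
}
```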

Interlocked works by making its need for atomicity known to the operating system and virtual machine.

Interlocked’s methods have a typical overhead of 10 ns — half that of an uncontended lock. Further, they can never suffer the additional cost of context switching due to blocking. The flip side is that using Interlocked within a loop with many iterations can be less efficient than obtaining a single lock around the loop (although Interlocked enables greater concurrency).


When to Lock

  • Nearly all of the .NET Framework’s nonprimitive types are not thread-safe (for anything more than read-only access) when instantiated, and yet they can be used in multithreaded code if all access to any given object is protected via a lock.
  • Wrapping access to an object around a custom lock works only if all concurrent threads are aware of — and use — the lock. This may not be the case if the object is widely scoped. The worst case is with static members in a public type.
  • Thread safety in static methods is something that you must explicitly code: it doesn’t happen automatically by virtue of the method being static!
  • Making types thread-safe for concurrent read-only access (where possible) is advantageous because it means that consumers can avoid excessive locking. Many of the .NET Framework types follow this principle: collections, for instance, are thread-safe for concurrent readers.




Exclusive locking is used to ensure that only one thread can enter particular sections of code at a time. The two main exclusive locking constructs are lock and Mutex. Of the two, the lock construct is faster and more convenient. Mutex, though, has a niche in that its lock can span applications in different processes on the computer.


A Comparison of Locking Constructs


Construct                                       Purpose                                                                   Cross-process?   Overhead*
lock (Monitor.Enter / Monitor.Exit)             Ensures just one thread can access a resource or section of code         No               20ns
                                                at a time
Mutex                                           (as above)                                                                Yes              1000ns
SemaphoreSlim (introduced in Framework 4.0)     Ensures not more than a specified number of concurrent threads can       No               200ns
                                                access a resource or section of code
Semaphore                                       (as above)                                                                Yes              1000ns
ReaderWriterLockSlim (introduced in Fx 3.5)     Allows multiple readers to coexist with a single writer                   No               40ns
ReaderWriterLock (effectively deprecated)       (as above)                                                                No               100ns


*Time taken to lock and unlock the construct once on the same thread (assuming no blocking), as measured on an Intel Core i7 860.





As a basic rule, you need to lock around accessing any writable shared field. Even in the simplest case — an assignment operation on a single field — you must consider synchronization.


class ThreadSafe
{
    static readonly object _locker = new object();
    static int _x;

    static void Increment() { lock (_locker) _x++; }
    static void Assign()    { lock (_locker) _x = 123; }
}


In Nonblocking Synchronization, we explain how this need arises, and how the memory barriers and the Interlocked class can provide alternatives to locking in these situations.


Nested Locking


A thread can repeatedly lock the same object in a nested (reentrant) fashion:


lock (locker)
    lock (locker)
        lock (locker)
        {
            // Do something…
        }




In these scenarios, the object is unlocked only when the outermost lock statement has exited — or a matching number of Monitor.Exit statements have executed.


Nested locking is useful when one method calls another from within a lock.




A deadlock happens when two threads each wait for a resource held by the other, so neither can proceed. The easiest way to illustrate this is with two locks:


object locker1 = new object();
object locker2 = new object();

new Thread (() =>
{
    lock (locker1)
    {
        Thread.Sleep (1000);
        lock (locker2) { }      // Deadlock
    }
}).Start();

lock (locker2)
{
    Thread.Sleep (1000);
    lock (locker1) { }          // Deadlock
}



The popular advice, “lock objects in a consistent order to avoid deadlocks,” although helpful in our initial example, is hard to apply to the scenario just described. A better strategy is to be wary of locking around calling methods in objects that may have references back to your own object. Also, consider whether you really need to lock around calling methods in other classes (often you do — as we’ll see later — but sometimes there are other options). Relying more on declarative and data parallelism, immutable types, and nonblocking synchronization constructs, can lessen the need for locking.


Here is an alternative way to perceive the problem: when you call out to other code while holding a lock, the encapsulation of that lock subtly leaks. This is not a fault in the CLR or .NET Framework, but a fundamental limitation of locking in general. The problems of locking are being addressed in various research projects, including Software Transactional Memory.




A Mutex is like a C# lock, but it can work across multiple processes. In other words, Mutex can be computer-wide as well as application-wide.


Acquiring and releasing an uncontended Mutex takes a few microseconds — about 50 times slower than a lock.


With a Mutex class, you call the WaitOne method to lock and ReleaseMutex to unlock. Closing or disposing a Mutex automatically releases it. Just as with the lock statement, a Mutex can be released only from the same thread that obtained it.


A common use for a cross-process Mutex is to ensure that only one instance of a program can run at a time. Here’s how it’s done:


class OneAtATimePlease
{
    static void Main()
    {
        // Naming a Mutex makes it available computer-wide. Use a name that’s
        // unique to your company and application (e.g., include your URL).
        using (var mutex = new Mutex (false, "OneAtATimeDemo"))
        {
            // Wait a few seconds if contended, in case another instance
            // of the program is still in the process of shutting down.
            if (!mutex.WaitOne (TimeSpan.FromSeconds (3), false))
            {
                Console.WriteLine ("Another app instance is running. Bye!");
                return;
            }
            RunProgram();
        }
    }

    static void RunProgram()
    {
        Console.WriteLine ("Running. Press Enter to exit");
        Console.ReadLine();
    }
}






If running under Terminal Services, a computer-wide Mutex is ordinarily visible only to applications in the same terminal server session. To make it visible to all terminal server sessions, prefix its name with Global\.




A semaphore with a capacity of one is similar to a Mutex or lock, except that the semaphore has no “owner” — it’s thread-agnostic. Any thread can call Release on a Semaphore, whereas with Mutex and lock, only the thread that obtained the lock can release it.


There are two functionally similar versions of this class: Semaphore and SemaphoreSlim. The latter was introduced in Framework 4.0 and has been optimized to meet the low-latency demands of parallel programming. It’s also useful in traditional multithreading because it lets you specify a cancellation token when waiting. It cannot, however, be used for interprocess signaling.


While performing intensive disk I/O, the Semaphore would improve overall performance by limiting excessive concurrent hard-drive activity.
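A sketch of that throttling idea (the class, the capacity of 2, and the simulated delays are all illustrative): a SemaphoreSlim gates simulated disk reads, and WaitAsync accepts the cancellation token mentioned above.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThrottledIO
{
    static readonly SemaphoreSlim _diskGate = new SemaphoreSlim (2);   // at most 2 in flight
    static int _inFlight;
    static bool _limitExceeded;

    static async Task ReadChunkAsync (CancellationToken ct)
    {
        await _diskGate.WaitAsync (ct);    // non-blocking wait; accepts a cancellation token
        try
        {
            if (Interlocked.Increment (ref _inFlight) > 2) _limitExceeded = true;
            await Task.Delay (100, ct);    // stands in for the actual disk read
            Interlocked.Decrement (ref _inFlight);
        }
        finally { _diskGate.Release(); }
    }

    public static async Task<bool> RunAsync()
    {
        var cts = new CancellationTokenSource();
        var tasks = new Task[6];
        for (int i = 0; i < 6; i++) tasks[i] = ReadChunkAsync (cts.Token);
        await Task.WhenAll (tasks);
        return !_limitExceeded;            // true: the gate kept concurrency at or below 2
    }

    static void Main() => Console.WriteLine (RunAsync().Result);   // True
}
```

Six reads are requested, but the semaphore guarantees no more than two run at once; the rest queue up without blocking any thread.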


Semaphores can be useful in limiting concurrency — preventing too many threads from executing a particular piece of code at once. In the following example, five threads try to enter a nightclub that allows only three threads in at once:


class TheClub      // No door lists!
{
    static SemaphoreSlim _sem = new SemaphoreSlim (3);    // Capacity of 3

    static void Main()
    {
        for (int i = 1; i <= 5; i++) new Thread (Enter).Start (i);
    }

    static void Enter (object id)
    {
        Console.WriteLine (id + " wants to enter");
        _sem.Wait();
        Console.WriteLine (id + " is in!");           // Only three threads
        Thread.Sleep (1000 * (int) id);               // can be here at
        Console.WriteLine (id + " is leaving");       // a time.
        _sem.Release();
    }
}






1 wants to enter
1 is in!
2 wants to enter
2 is in!
3 wants to enter
3 is in!
4 wants to enter
5 wants to enter
1 is leaving
4 is in!
2 is leaving
5 is in!




Basically, we all have a difficult choice to make whenever doing multi-threaded programming. We can (1) automatically release locks upon exceptions, exposing inconsistent state and living with the resulting bugs (bad) (2) maintain locks upon exceptions, deadlocking the program (arguably often worse) or (3) carefully implement the bodies of locks that do mutations so that in the event of an exception, the mutated resource is rolled back to a pristine state before the lock is released. (Good, but hard.)


This is yet another reason why the body of a lock should do as little as possible. Usually the rationale for having small lock bodies is to get in and get out quickly, so that anyone waiting on the lock does not have to wait long. But an even better reason is because small, simple lock bodies minimize the chance that the thing in there is going to throw an exception. It’s also easier to rewrite mutating lock bodies to have rollback behaviour if they don’t do very much to begin with.
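Option (3) above can be sketched as follows (the Account class and its audit rule are hypothetical): the lock body snapshots the state before mutating, and restores it if an exception escapes mid-mutation, so no other thread ever observes the half-mutated resource.

```csharp
using System;

class Account
{
    readonly object _locker = new object();
    decimal _balance = 100m;

    public void Withdraw (decimal amount)
    {
        lock (_locker)
        {
            decimal before = _balance;     // snapshot for rollback
            _balance -= amount;            // first half of the mutation
            try
            {
                Audit (amount);            // may throw mid-mutation
            }
            catch
            {
                _balance = before;         // restore pristine state before releasing the lock
                throw;
            }
        }
    }

    void Audit (decimal amount)
    {
        if (amount < 0) throw new ArgumentException ("negative amount");
    }

    public decimal Balance { get { lock (_locker) return _balance; } }

    static void Main()
    {
        var acct = new Account();
        try { acct.Withdraw (-5m); } catch (ArgumentException) { }
        Console.WriteLine (acct.Balance);   // 100 — the failed mutation was rolled back
    }
}
```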


And of course, this is yet another reason why aborting a thread is pure evil. Try to never do so!




Recall that lock (obj) { body } is syntactic sugar for:

Monitor.Enter (obj);
try
{
    // Your code here…
}
finally
{
    Monitor.Exit (obj);
}




However, keep in mind that Monitor can also Wait() and Pulse(), which are often useful in complex multithreading situations.
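As a sketch of Wait and Pulse (the Handoff class is illustrative): one thread parks inside the lock until another thread hands it an item. Wait releases the lock while sleeping; Pulse wakes one waiter, which then reacquires the lock.

```csharp
using System;
using System.Threading;

class Handoff
{
    static readonly object _locker = new object();
    static string _item;

    public static string Consume()
    {
        lock (_locker)
        {
            while (_item == null)
                Monitor.Wait (_locker);   // releases the lock and sleeps until pulsed
            string taken = _item;
            _item = null;
            return taken;
        }
    }

    public static void Produce (string item)
    {
        lock (_locker)
        {
            _item = item;
            Monitor.Pulse (_locker);      // wakes one waiter; it reacquires the lock
        }
    }

    static void Main()
    {
        var consumer = new Thread (() => Console.WriteLine (Consume()));
        consumer.Start();
        Thread.Sleep (100);
        Produce ("payload");
        consumer.Join();                  // prints "payload"
    }
}
```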


The problem here is that if the compiler generates a no-op instruction between the monitor enter and the try-protected region, then it is possible for the runtime to throw a thread-abort exception after the monitor enter but before the try. In that scenario, the finally never runs, so the lock leaks, probably eventually deadlocking the program. It would be nice if this were impossible in both unoptimized and optimized builds.




However, in C# 4 it is implemented differently:


bool lockWasTaken = false;
var temp = obj;
try {
    Monitor.Enter(temp, ref lockWasTaken);
    // your code
}
finally {
    if (lockWasTaken) Monitor.Exit(temp);
}

Implicit in this codegen is the belief that a deadlocked program is the worst thing that can happen. That’s not necessarily true! Sometimes deadlocking the program is the better thing to do — the lesser of two evils.


The purpose of the lock statement is to help you protect the integrity of a mutable resource that is shared by multiple threads. But suppose an exception is thrown halfway through a mutation of the locked resource. Our implementation of lock does not magically roll back the mutation to its pristine state, and it does not complete the mutation. Rather, control immediately branches to the finally, releasing the lock and allowing every other thread that is patiently waiting to immediately view the messed-up partially mutated state! If that state has privacy, security, or human life and safety implications, the result could be very bad indeed. In that case it is possibly better to deadlock the program and protect the messed-up resource by denying access to it entirely. But that’s obviously not good either.




Signaling with Event Wait Handles


Event wait handles are used for signaling. Signaling is when one thread waits until it receives notification from another.


The most important difference is that an AutoResetEvent will only allow one single waiting thread to continue. A ManualResetEvent on the other hand will keep allowing threads, several at the same time even, to continue until you tell it to stop (Reset it).


The ManualResetEvent is the door, which needs to be closed (reset) manually. The AutoResetEvent is a tollbooth, allowing one car to go by and automatically closing before the next one can get through.


Just imagine that the AutoResetEvent executes WaitOne() and Reset() as a single atomic operation.
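That atomic wait-and-reset behaviour can be mimicked with a counting semaphore. Java has no AutoResetEvent, so the sketch below (an analogy assumed here, not an established equivalence) uses a java.util.concurrent.Semaphore with zero initial permits: Set() maps to release(), and WaitOne() maps to acquire(), which consumes the permit atomically.

```java
import java.util.concurrent.Semaphore;

class AutoResetSketch {
    // A semaphore with 0 initial permits stands in for an unsignaled AutoResetEvent.
    private final Semaphore gate = new Semaphore(0);

    void set() { gate.release(); }                     // cf. AutoResetEvent.Set: admit one waiter

    boolean tryWaitOne() { return gate.tryAcquire(); } // non-blocking WaitOne: take the permit, if any
}
```

After one set(), exactly one tryWaitOne() succeeds; the tollbooth closes again automatically because acquiring removed the only permit.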


CountdownEvent lets you wait on more than one thread. The class is new to Framework 4.0 and has an efficient fully managed implementation.
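CountdownEvent maps closely onto Java's CountDownLatch, used below as a stand-in sketch (CountdownSketch and runWorkers are illustrative names): the waiting thread blocks in await() until three workers have each counted down once.

```java
import java.util.concurrent.CountDownLatch;

class CountdownSketch {
    // Start three workers and wait until all of them have signaled.
    static long runWorkers() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(latch::countDown).start(); // each worker signals exactly once
        }
        latch.await();           // blocks until the count reaches zero
        return latch.getCount(); // 0 once every worker has signaled
    }
}
```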




Comparison of Signaling Constructs

(In the comparison table, the overhead figures are the time taken to signal and wait on the construct once on the same thread, assuming no blocking, as measured on an Intel Core i7 860.)


A thread waits on an AutoResetEvent by calling WaitOne, and any thread with access to the AutoResetEvent object can call Set on it to release one blocked thread.


You can create an AutoResetEvent in two ways. The first is via its constructor:


var auto = new AutoResetEvent (false);


(Passing true into the constructor is equivalent to immediately calling Set upon it.) The second way to create an AutoResetEvent is as follows:


var auto = new EventWaitHandle (false, EventResetMode.AutoReset);


In the following example, a thread is started whose job is simply to wait until signaled by another thread:


class BasicWaitHandle
{
  static EventWaitHandle _waitHandle = new AutoResetEvent (false);

  static void Main()
  {
    new Thread (Waiter).Start();
    Thread.Sleep (1000);                  // Pause for a second…
    _waitHandle.Set();                    // Wake up the Waiter.
  }

  static void Waiter()
  {
    Console.WriteLine ("Waiting…");
    _waitHandle.WaitOne();                // Wait for notification
    Console.WriteLine ("Notified");
  }
}

// Output:
Waiting… (pause) Notified.




Wait Handles and the Thread Pool


If your application has lots of threads that spend most of their time blocked on a wait handle, you can reduce the resource burden by calling ThreadPool.RegisterWaitForSingleObject. This method accepts a delegate that is executed when a wait handle is signaled. While it’s waiting, it doesn’t tie up a thread:


static ManualResetEvent _starter = new ManualResetEvent (false);

public static void Main()
{
  RegisteredWaitHandle reg = ThreadPool.RegisterWaitForSingleObject
    (_starter, Go, "Some Data", -1, true);
  Thread.Sleep (5000);
  Console.WriteLine ("Signaling worker…");
  _starter.Set();
  Console.ReadLine();
  reg.Unregister (_starter);    // Clean up when we’re done.
}

public static void Go (object data, bool timedOut)
{
  Console.WriteLine ("Started - " + data);
  // Perform task…
}






Software Maintenance and Support




“Begin with the end in mind”

  • Maintenance and support will continue for the life of your software system, and a significant portion of the system’s life-cycle budget will be consumed by these tasks. In fact, experts estimate that maintenance can eventually account for 40 to 80% of the total project cost.
  • Software does not “wear out” but it will become less useful as it gets older, plus there WILL always be issues within the software itself.
  • Software maintenance cost is derived from the changes made to software after it has been delivered to the end user.


What is Maintenance?


This phase of the software lifecycle consists of the tasks required to keep your system operational after it is delivered into Production.


The different types of maintenance tasks are described as:

  1. Corrective – updates required to correct or fix problems (generally 20% of software maintenance costs).
  2. Perfective – modifications that enhance or improve the functionality or performance of the software, including new user requirements (generally 5% of software maintenance costs).
  3. Adaptive – modifications required to keep the software effective in a changing business or technical environment, e.g. an upgrade to the operating system (generally 25% of software maintenance costs).
  4. Preventative – changes that correct potential flaws or latent problems in the software before they cause failures.
  5. Enhancements – costs due to continuing innovations (generally 50% or more of software maintenance costs); some taxonomies fold these into Perfective maintenance.

What is Support?

Support refers to the assistance given to users to address their problems and queries after system implementation.

Effort Estimation

The key parameters considered while estimating the required effort are listed below:

  • The industry and application type
  • Size of the application
  • Platform types
  • Programming language used
  • Effort spent on different maintenance activities
  • Effort spent on different support activities
  • The number and types of defects found during the maintenance period
  • Average time taken to repair defects
  • Volume of calls to the Help Desk
  • Team size

Approaches to taking over projects in production

Requirement Elicitation

  • The term elicitation is used in books and research to highlight the fact that good requirements cannot simply be collected from the customer, as the name “requirements gathering” would suggest. Requirements elicitation is non-trivial because you can never be sure you have obtained all requirements from the user and customer just by asking them what the system should or should not do (the latter mattering for safety and reliability). Requirements elicitation practices include interviews, questionnaires, user observation, workshops, brainstorming, use cases, role playing, and prototyping.

Detect patterns and the general structure of an application at a high level

  • Reverse-engineer the source code

    The architecture tools and Static Code Analysis in Visual Studio Ultimate help us to visualize the organization, relationships, design patterns and behavior of existing applications

  • Generate sequence diagrams from the existing code and get required interfaces for each component
  • Sequence diagrams help to assess the impact of changes

Improve productivity and quality through Automation

  • Automated Live Unit Testing with VS
  • Automatically runs the impacted unit tests in the background as you type and provides real-time feedback

Use Agile or Iterative Waterfall SDLC

  • Produces working software early during the lifecycle
  • The focus is on delivering a sprint of work
  • Deliver series of valuable/shippable features/projects
  • Low risk
    • Risks can be identified and resolved during each iteration
    • If one project goes wrong, it does not impact another project
  • Flexible
    • Scope and requirement changes can be implemented at low cost
Questions / Metrics to be clarified

  1. Size of each application or module
Application | Number of modules | Number of scheduled batches | Number of integrations to external applications

The number and types of defects found in a year

Classification Priority No of issues found
Standard Critical (P1)
High (P2)
Medium (P3)
Low (P4)

List of different .NET languages, Databases used with the applications.

Technology / Languages Number of applications
SQL Server

List of different .NET frameworks used with the applications.

Technology / Languages Number of applications
ASP.NET – Web Forms
Entity Framework
Any other technologies

Third-party applications or packages integrated with the applications

Third Party integrations Number of applications
CRM (Siebel, Vantive, Remedy, SharePoint, Documentum etc.)
BI / OLAP / DW Tools

(ETL, Data Stage, Sagent, Informatica,

SAS, Ab Initio)

ERP Skills (Peoplesoft, SAP,

Oracle Applications etc.)

Software development life cycle models used

SDLC Number of applications Number of releases in a year
Iterative Waterfall
Any other SDLC methods

Type of integration and deployment methods used (to estimate the effort to deliver the build to different environments)

Integration and Deployment Number of applications
Automated deployments only
Manual Integration and deployment

Availability of the documents in English

  • Architecture documents, HLD, LLD, User guides, deployment documents etc


  5. SDLC Models
  6. Visual Studio Architecture Tooling Guide Scenarios

SPA – Part 1: JavaScript to Angular


This article is a collection of notes and references from other web sites for the self-study of Single Page Applications and Angular. The source web sites referred to are listed in the “POINTS OF INTEREST” section of this article.


  • Single-Page Applications (SPAs) are Web apps that load a single HTML page and dynamically update that page as the user interacts with the app.
  • SPAs use AJAX and HTML5 to create fluid and responsive Web apps, without constant page reloads. However, this means much of the work happens on the client side, in JavaScript.

The Traditional Page Lifecycle vs. the SPA Lifecycle
  • In a traditional Web app, every time the app calls the server, the server renders a new HTML page. This triggers a page refresh in the browser.
  • An SPA renders only one HTML page from the server, when the user starts the app. Along with that one HTML page, the server sends an application engine to the client. The engine controls the entire application including processing, input, output, painting, and loading of the HTML pages.
  • In an SPA, after the first page loads, all interaction with the server happens through AJAX calls. These AJAX calls return data—not markup—usually in JSON format. The app uses the JSON data to update the page dynamically, without reloading the page.
  • Benefits
    • One benefit of SPAs is obvious: Applications are more fluid and responsive, without the jarring effect of reloading and re-rendering the page.
    • Typically, 90–95 percent of the application code runs in the browser; the rest runs on the server when the user needs new data or must perform secured operations such as authentication. Because dependency on the server is mostly removed, an SPA scales well: no matter how many users access the server simultaneously, the performance of the 90–95 percent of the app that runs in the browser is not impacted.
    • Sending the app data as JSON creates a separation between the presentation (HTML markup) and application logic (AJAX requests plus JSON responses).
    • In a pure SPA, all UI interaction occurs on the client side, through JavaScript and CSS. After the initial page load, the server acts purely as a service layer.

ECMAScript Vs TypeScript

ECMAScript (or ES) is a trademarked scripting-language specification standardized by Ecma International in ECMA-262 and ISO/IEC 16262.

  • It was created to standardize JavaScript, so as to foster multiple independent implementations. ECMAScript is the language, whereas JavaScript, JScript, and even ActionScript 3 are called “dialects”.
  • ES5 is the JavaScript you know and use in the browser today. ECMAScript version 5 was finished in December 2009, and the latest versions of all major browsers (Chrome, Safari, Firefox, and IE) have implemented it. So ES5 does not require a build step (a transpiler) to transform it into something that will run in today’s browsers.
  • Coders commonly use ECMAScript for client-side scripting on the World Wide Web, and it is increasingly being used for writing server applications and services using Node.js.
  • TypeScript is a strongly typed, object-oriented, compiled language. It was designed by Anders Hejlsberg (the designer of C#) at Microsoft. TypeScript is both a language and a set of tools: a typed superset of JavaScript that compiles to JavaScript. In other words, TypeScript is JavaScript plus some additional features.

  • Because TypeScript introduces data types, the concepts below become applicable:
    • Variable declaration with datatypes
    • Type conversion/casting
    • Function overloading
    • Generics
  • Because OOP constructs are built into TypeScript, the concepts below become applicable:
    • Class declarations with private, public, protected, and static members
    • Constructors
    • Inheritance, overriding, interfaces, and the this and super keywords

OOP in ECMAScript vs TypeScript

  • An object is an unordered list of name-value pairs. Each item in the list can be a property or a method.
  • JavaScript does not have classes. The classes in ES2015 are just a cleaned up syntax for setting up prototype inheritance between objects.
  • ECMA5 has a prototype-based inheritance mechanism: every JavaScript function has a prototype property, and you attach properties and methods to this prototype property when you want to implement inheritance (to make those methods and properties available to instances of that function). This prototype property is not enumerable, and only one prototype object can be assigned to a function.
  • A constructor is a function used for initializing new objects, and you use the new keyword to call the constructor.

In ES5 and earlier, constructor functions defined “classes” like this:

function Person(firstName, lastName) {
  this.firstName = firstName;
  this.lastName = lastName;
}

var person = new Person("Bob", "Smith");

ES2015 introduces a new syntax using the class keyword:

// the name of the ES5 constructor
// function is name of the ES2015 class
class Person {

  // observe there is no "function" keyword
  // also, the word "constructor" is used, not "Person"
  constructor(firstName, lastName) {

    // this represents the new object being
    // created and initialized
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

var person = new Person("Bob", "Smith");
// TypeScript

class Person {
  firstName: string;
  lastName: string;
  constructor(firstName: string, lastName: string) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

var person = new Person("Mary", "Smith");

Hello World from TypeScript

<!DOCTYPE html>
<html lang="en">
<head>
    <title>TypeScript HTML App</title>
    <script src="app.js"></script>
</head>
<body>
    <div id="content"></div>
</body>
</html>

class Greeter {
    element: HTMLElement;

    constructor(element: HTMLElement) {
        this.element = element;
        this.element.innerHTML = "Hello World";
    }
}

window.onload = () => {
    var el = document.getElementById('content');
    var greeter = new Greeter(el);
};

// Transpiled code
var Greeter = (function () {
    function Greeter(element) {
        this.element = element;
        this.element.innerHTML = "Hello World";
    }
    return Greeter;
}());

window.onload = function () {
    var el = document.getElementById('content');
    var greeter = new Greeter(el);
};

Points of Interest

You can explore more about objects and OOP in JavaScript in the references below.


Version 1.0 – 2017  June 21 – Initial