Saturday, September 30, 2017

Pair Programming

Pair Programming is an agile software development technique in which two programmers work closely together to facilitate knowledge transfer between them and increase software quality in the process. Both parties have distinct responsibilities: while one writes the code, the other acts as an observer and reviews the code as it is written.



Various Pair Programming styles include:

The Grant: Commander or backseat driver; the junior partner acts as the driver while the more experienced colleague takes the role of commander or backseat driver.

The rally: Both pair members have good technical know-how and a deep understanding of the programming process. They know each other well and work in tandem, much like a rally driver works together with his navigator.

The tour: One partner takes control of the whole process and guides the other through the various programming aspects, much like a driver guiding a navigator who has lost track of the route.

Disconnected pair: One partner carries on after the other has disengaged completely. While the team might have started on the right foot, one of them loses focus along the way, and the other assumes both the driver's and the navigator's role, completing the programming work based on his own expertise.

Benefits of Pair Programming

Previous studies indicate that code review is among the most effective ways of improving programming quality. In pair programming, this quality check is conducted before, during, and after the coding, so quality is attained by removing defects such as logical errors early. Furthermore, the process offers a chance to produce better application software, and the pair can explore alternatives that may lead to new solutions.

Solo programmers, on the other hand, often work with code devised by other experts. They may not have the opportunity to audit these systems thoroughly, whether for lack of expertise or for other reasons.

Information is essential to humanity. Bert Markgraf explains that information at the organizational level should be exploited from different perspectives.

My take on pair programming is that, through the process, both programmers can learn from each other in terms of technical skills and work experience. The knowledge exchanged may include domain knowledge, the structure of the code base, frameworks (their usage, discovery, and common code contributions from third-party or internal sources), and the programming language and its standards. In this way, programmers acquire new and additional skills, which in turn boosts their efficiency.

Since pairing fosters a working environment based on trust, teamwork comes almost naturally: the team-building aspect follows from the fact that the pair consistently works together.

Setbacks of Pair Programming

Regardless of its benefits, some software developers still don't consider it a good idea. Some programmers prefer working independently rather than in a pair, arguing that they don't enjoy thinking out loud, or that they need time to go through the code and reason by themselves.

The underlying challenge usually lies with programmers who have poor communication skills; for them, adopting this method could negatively impact the quality of the end product. Constant communication breakdowns also cost time and undermine team building and teamwork. For solo programming, a teamwork environment can still be approximated through measures such as distributing the work and understanding each person's field of specialization.

Pair programming also requires good synchronization of the pair members: they have to start and stop working at the same time and take their breaks and days off together, so the absence of either partner stalls the process. Moreover, replacing a missing partner can waste a lot of time, since the replacement needs training and time to adjust to the remaining partner.

How companies can leverage Pair Programming for knowledge transfer

The question of whether companies can rely on pair programming to drive knowledge transfer between developers remains relevant. Pair programming by itself is an interaction between two developers of an organization. As a review by Franz Zieris and Lutz Prechelt suggests, its value depends largely on how well it promotes the understanding of technical know-how for specific aspects of application development, such as code sequencing.

Gerardo Canfora and his team acknowledge that companies can achieve a strategic fit by sharing software engineering process knowledge among their developers. They also point out, however, that the biggest impediment to the process comes from the individuals' abilities or attitude towards the exercise. Because both partners constantly have to explain things to each other, the success of the knowledge transfer rests mainly on effective communication. On that note, companies that want to leverage pair programming for knowledge transfer should treat it as a combination of training, information sharing, and team building for the pairs involved.

Conclusion

Pair Programming presents an interesting approach to increase software quality and knowledge transfer between developers. Both parties need to trust each other’s abilities and must be able to communicate with each other at an equal level in order to leverage the full potential of this technique.

Sources





Saturday, September 23, 2017

SOLID - 4. The Interface Segregation Principle

Next up in the series about the five SOLID principles, let's take a look at the Interface Segregation Principle (ISP). 


You can read up on my other articles on SOLID here:


The Interface Segregation Principle


The Interface Segregation Principle addresses the cohesion of interfaces and says that clients should not be forced to rely on methods they do not use. To have a cohesive and reusable class, we must give it a single responsibility. But sometimes, even this single responsibility can be broken into even smaller responsibilities, making your interface more user friendly.

Let's clarify the concept with an example. Suppose we had defined the interface IScrumTeamMember as follows:

public interface IScrumTeamMember
{
    void PrioritizeBacklog();
    void ShieldTeam();
    void DevelopFeatures();
}

And then we add the Developer, ScrumMaster, and ProductOwner classes implementing the IScrumTeamMember interface:

public class Developer : IScrumTeamMember
{
    public void PrioritizeBacklog() {}
    public void ShieldTeam() {}
    public void DevelopFeatures()
    {
        Console.Writeline("Developing new features");
    }
}

public class ScrumMaster : IScrumTeamMember
{
    public void PrioritizeBacklog() {}
    public void ShieldTeam()
    {
        Console.Writeline("Shield Scrum Development Team");
    }
    public void DevelopFeatures() {}
}

public class ProductOwner : IScrumTeamMember
{
    public void PrioritizeBacklog()
    {
        Console.Writeline("Prioritize backlog items");
    }
    public void ShieldTeam() {}
    public void DevelopFeatures() {}
}

When we create such a generic interface, we force implementations to provide methods they do not use. In the Developer case, this happens with the PrioritizeBacklog and ShieldTeam methods, which do nothing because they are not responsibilities of a Developer but of the ProductOwner and ScrumMaster, respectively.

Problems

Suppose that some change is required in the ShieldTeam method, which now needs to receive some parameters. We are then required to change all implementations of IScrumTeamMember (Developer, ScrumMaster, and ProductOwner) because of a change that should only affect the ScrumMaster class.

In addition, client-side classes that depend on IScrumTeamMember will have to be recompiled, and if they are spread across several components, these will have to be redistributed as well, sometimes unnecessarily, because they did not even use the ShieldTeam method.
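To illustrate the ripple effect, here is a sketch of such a change (the impediment parameter is made up for illustration):

public interface IScrumTeamMember
{
    void PrioritizeBacklog();
    // Changed signature: ShieldTeam now takes a parameter.
    void ShieldTeam(string impediment);
    void DevelopFeatures();
}

// Developer, ScrumMaster, and ProductOwner must all be updated and
// recompiled, even though only ScrumMaster actually uses ShieldTeam.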

Another problem is that implementing useless (degenerate) methods can lead to a violation of the Liskov Substitution Principle, since someone using IScrumTeamMember could assume the following:

foreach (var member in scrumTeamMembers)
{
    member.DevelopFeatures();
}

However, we know that only a Developer actually performs this behavior. If the list also contained objects of type ScrumMaster or ProductOwner, those objects would do nothing, or worse, could throw an exception if their implementation chose to do so.

Resolving the ISP violation

The solution to the above example is to create more specific interfaces so that each client class depends only on what it actually needs. For example:

public interface IScrumMasterFunction
{
    void ShieldTeam();
}

public class ScrumMaster : IScrumMasterFunction
{
    public void ShieldTeam()
    {
        Console.Writeline("Shield the Scrum Development Team");
    }
}

With the above change, the ScrumMaster concrete class no longer needs to implement unnecessary methods, and other classes that depended on IScrumTeamMember only to use ShieldTeam may now depend on the IScrumMasterFunction interface.

The same idea can be applied to the specific functions of Developer and ProductOwner, so that all the IScrumTeamMember clients can now depend specifically on the interfaces they use.
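As a minimal sketch of what that split could look like (the interface names IDeveloperFunction and IProductOwnerFunction are my own choice for illustration; the article itself only introduces IScrumMasterFunction):

public interface IDeveloperFunction
{
    void DevelopFeatures();
}

public interface IProductOwnerFunction
{
    void PrioritizeBacklog();
}

public class Developer : IDeveloperFunction
{
    public void DevelopFeatures()
    {
        Console.WriteLine("Developing new features");
    }
}

public class ProductOwner : IProductOwnerFunction
{
    public void PrioritizeBacklog()
    {
        Console.WriteLine("Prioritize backlog items");
    }
}

A client that only needs development work done can now iterate over a collection of IDeveloperFunction objects and call DevelopFeatures without ever touching degenerate methods.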

Conclusion

The Interface Segregation Principle alerts us to dependencies on "fat interfaces," which force concrete classes to implement unnecessary methods and cause tight coupling between all clients. By using more specific interfaces, we break this coupling between client classes.

In addition, the Interface Segregation Principle helps us increase the granularity of our objects, increasing the cohesion of their interfaces and drastically reducing coupling. This improves the maintainability of our code, since simpler interfaces are easier to understand and implement.

Friday, September 15, 2017

SOLID - 3. The Liskov Substitution Principle

Continuing the series about the five SOLID principles, today I invite you to explore the Liskov Substitution Principle (LSP). 

Remember all FIVE principles, and the meaning of the SOLID acronym:

  • S - Single Responsibility Principle
  • O - Open/Closed Principle
  • L - Liskov Substitution Principle
  • I - Interface Segregation Principle
  • D - Dependency Inversion Principle


You can read up on my other articles on SOLID here:


The Liskov Substitution Principle


This principle takes its name from Barbara Liskov, who first presented it at a conference in 1987.

The most commonly used definition says:

"Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it."

That is a simpler way of explaining the formal definition of Liskov:

"If for every object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is replaced by o2 then S is a subtype of T".

A subclass should override the methods of the parent class in such a way that it does not break functionality from the client's point of view.

Let's assume we want to program a race simulation with different kinds of vehicles:

public abstract class Vehicle
{
    public abstract void StartEngine();

    public abstract void SetDriver(string driver);       
}

public class Truck : Vehicle
{
    public override void StartEngine()
    {
        //..
    }

    public override void SetDriver(string driver)
    {
        //..
    }
}

public class Motorbike : Vehicle
{
    public override void StartEngine()
    {
        //..
    }

    public override void SetDriver(string driver)
    {
        //..
    }
}

public class RaceSimulator
{
    private IList<Vehicle> _vehicles;
    private IList<string> _drivers;

    //..

    public void Initialize()
    {
        foreach (string driver in _drivers)
        {
            Vehicle v = GetRandomVehicle();
            v.SetDriver(driver);
            v.StartEngine();                
        }
    }
}

This code looks good and works fine until we introduce a new type of vehicle:

public class Bicycle : Vehicle
{
    public override void StartEngine()
    {
        throw new NotImplementedException();
    }

    public override void SetDriver(string driver)
    {
        //..
    }
}

Here we have created a violation of the LSP. A bicycle is also a type of vehicle, but since it has no engine it behaves differently from the other vehicles in the example. This is why, when you are thinking about inheritance, you should not only ask whether B is a type of A but, more importantly, whether B behaves like A does.

In addition, this violation may result in a violation of the Open/Closed Principle, with all the problems that follow from it:

public void Initialize()
{
    foreach (string driver in _drivers)
    {
        Vehicle v = GetRandomVehicle();
        v.SetDriver(driver);

        if (!(v is Bicycle))
        {
            v.StartEngine();
        }
    }
}
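One possible way to repair the hierarchy, sketched here as my own suggestion rather than part of the original example, is to move StartEngine into a dedicated MotorizedVehicle base class so that only vehicles with an engine expose it:

public abstract class Vehicle
{
    public abstract void SetDriver(string driver);
}

// Only vehicles that actually have an engine expose StartEngine.
public abstract class MotorizedVehicle : Vehicle
{
    public abstract void StartEngine();
}

public class Bicycle : Vehicle
{
    public override void SetDriver(string driver)
    {
        //..
    }
}

Truck and Motorbike would then derive from MotorizedVehicle, and the RaceSimulator could hold an IList<MotorizedVehicle> and call StartEngine without any type checks.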


Conclusion

The LSP is an extension of the Open/Closed Principle. It ensures that new derived classes are extending the base classes without changing their behavior.

When the Liskov Substitution Principle is met, base classes are replaceable by their derived classes, and any code that uses the base class can also meet the Open/Closed Principle, facilitating high maintainability.

Friday, September 1, 2017

Test Driven Development (TDD)

Many companies follow a rather traditional model of software development and quality assurance, where the features of the software are implemented based on business requirements and then tested once the implementation has been completed.

Even in an agile development cycle, errors are often found late and the test coverage of projects is low. Test Driven Development (TDD) is a practice that tries to address these issues.

The TDD Cycle

The approach is simple: write a test that fails, make it pass in the simplest way possible, and then refactor the code. This cycle is known as the Red-Green-Refactor cycle.





When practicing TDD, before implementing any actual program code, the developer formalizes the desired behavior of a given feature by writing an automated test for it. The test is nothing more than a piece of code that makes it clear what a particular piece of the software should do.

When run, the test fails because the actual functionality has not yet been implemented. The developer then works to get this test to pass by implementing the required functionality with as little code as possible.

The Red-Green-Refactor cycle is repeated as many times as necessary, until the feature has been completed.
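As a minimal sketch of one such cycle in C# (the Calculator class is invented for this example, and I'm assuming the xUnit test framework; any unit testing framework works the same way):

using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        // Red: this test fails as long as Calculator.Add is missing
        // or does not behave as specified.
        var calculator = new Calculator();

        int result = calculator.Add(2, 3);

        Assert.Equal(5, result);
    }
}

public class Calculator
{
    // Green: the simplest implementation that makes the test pass.
    public int Add(int a, int b)
    {
        return a + b;
    }
}

Once the test is green, the code can be refactored freely, with the test acting as a safety net.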

By creating the test case before implementing the unit, problems such as misunderstanding of requirements or interfaces are reduced and at the end of the development cycle all implemented functionality is guaranteed to have test coverage.

To write the initial tests, the developer must understand in detail the specification of the system and the business rules. In addition, the tests should follow the FIRST model:

  • F (Fast) - Tests must be fast, because they will be run all the time.
  • I (Isolated) - Tests must not depend on each other, so they can be run in any order.
  • R (Repeatable) - When a test is run multiple times, it must have the same results.
  • S (Self-verifying) - The test must check by itself if it has passed with no human interaction.
  • T (Timely) - Tests must be written at about the same time as the code being tested (when practicing TDD, the tests are written first!)


Advantages of Test Driven Development

The software is tested constantly throughout the development process and is guaranteed to have very high test coverage. Developers gain an understanding of the business rules early in the development cycle, because that understanding is required to write the test cases.

The existing tests also verify that modifications do not break business rules that were already working (regression testing).


What makes the difference between practicing TDD and writing tests later?

The main reason for increased software quality is not the TDD practice by itself, but the automated tests produced through it. The common question is then: what is the difference between doing TDD and writing the tests later?

The developer gets feedback from the tests, and the difference lies precisely in how early that feedback arrives. When the developer writes the tests only after implementing the production code, a lot of time has already passed without any feedback. The earlier the developer receives feedback, the better: with a lot of code already written, changes are cumbersome and costly, while with less code written, the cost of change is lower. That is exactly what happens for TDD practitioners: they get feedback at a time when change is still cheap.

The other difference is that when following TDD a high test coverage is guaranteed, whereas a traditional approach might leave you with only a few tests when time runs short. 

Conclusion

TDD is an interesting method to skyrocket your project's test coverage and increase code quality. It integrates well into agile development processes, helps anticipate problems early, and makes the project less susceptible to failure.

It comes at a cost, though: most developers don't have experience with it, so if you plan to introduce it at your workplace, it will require some training and the willingness of everybody involved to try something different.