
Developer News - What's IN with the Infragistics Community? (2/8-2/21)


Have a few minutes of down-time today? Why not check out the hot topics from the past 2 weeks, as identified by the elite developer community here at Infragistics?

5. Responsive Web Design: What the Internet Looks Like in 2016 (Canva)

4. What Should I Make? Beginner Programming Project Ideas (Programming for Beginners)

3. Why You Should Avoid Job Titles Like “Coding Ninja” (And What You Should Do Instead) (LinkedIn)

2. Useful Apps To Learn Design And Coding On Your Smartphone (Forbes)

1. What Program Languages Should I Use? (Java, C#, C++, or HTML5) (CodeProject)


Wireframing and Deadpool


You know wireframes. The white-gray-black series of doodles of your website, mobile app, or rich client app that bring structure to the process, eliminate logical flaws and ultimately save project time.

Cool. Except, sometimes they don’t.

Wireframes, even the interactive ones, don’t make much sense to someone who hasn’t dealt with wireframes before, or doesn’t have the time, ability or desire to understand them. And oftentimes, that someone is the project’s decision maker.


The Curious Case of the Boring Wireframe.

 

What can we do?

Too often, wireframes are as dull and unengaging as the plot of Deadpool - one guy takes revenge and kills a truckload of people in the meantime. Nothing remarkable, we have seen that a million times.

But Deadpool is awesome. What makes it great are the punchlines flying around all the time. References to other Marvel movies (e.g. X-Men), unrelated flicks and series (e.g., Homeland), mockery and self-irony (e.g., Ryan Reynolds’ acting abilities) give life to the basic structure of the movie.

Here’s a crazy idea: How about we make the person looking at our wireframes enjoy themselves, so they actually spend more time on it?


Deadpool nailing wireframing. Such headline. Much wow.

How about we include cultural references here and there? For example:

A) Turn to advertising. Let the professional copywriters do your job. Include a famous TV commercial slogan (try „Just do it“ or „Do the Dew“ for your good old action button);

B) Pay tribute to screenwriters and songwriters. Replace the „lorem ipsum“ paragraphs with the lyrics of a song or a quote from a movie character.

Spice up your wireframe with paragraph text from Breaking Bad

C) Movie posters have it all. Is the company you’re making a website for selling soap? Try the headline of Fight Club „Mischief, mayhem, soap“. It sticks.

A few words of advice,

should you go down this path of the dark side:

1. Make sure you use cultural references that your audience will understand (see what I did in the previous sentence?) and not be offended by.

2. Don’t confuse people. This is a wireframe/prototype that should make sense to the person looking at it - if it’s a button for „go“ (Start), don’t write „Gone in 60 Seconds“. It won’t make sense.

3. Don’t spend twice as much time on the wireframe/prototype just because you want to sound smart. It’s not about you. It’s about the wireframe and saving time for the project.

4. Don’t overdo it. You don’t need to make every line a punchline. Have one killer one and leave it at that.

5. Know your audience. If your client is a serious enterprise investing millions in the project, be careful. There are people who love Deadpool, and there are those who think it’s just not serious enough.

6. Use what’s given to you. This approach is used to replace lorem ipsum / dummy texts. If someone made the effort to give you the text that’s actually going to be in the final product - use it. Don’t change text for fun’s sake - you’re not a comedian.

So, will this approach make any dull wireframe awesome?

Heck no — it’s your job to make a great, easy-to-understand, breeze-to-click-through wireframe that helps rather than confuses the audience. But pouring in some heart and soul and a few funny lines can give that excellent wireframe of yours an extra edge. To infinity and beyond! Let me know if you’ve tried this approach, and whether it has done miracles for you, or you've lost that million-dollar contract because of all the fun you had while prototyping.

5 of The Best SharePoint Pro Twitter Accounts


Do you think Twitter is all about procrastination? You’d be wrong. Following a wide range of Twitter users, sourcing information from a variety of accounts and really engaging with posts can have considerable benefits for your professional life.

A recent five-year study of over 200 Twitter users uncovered some fascinating insights into the impact the social network has had on their professional lives. Twitter offered a range of benefits to the study’s respondents, yet by far the most important was that they were more likely to innovate and suggest new ideas. The study discovered that the more diverse a person’s network of friends and sources on Twitter, the more innovative their ideas tended to be.

Twitter is all about sharing news, ideas and cutting-edge stories. It’s therefore no surprise that by accessing all this information first, you’ll be able to act on it in your professional life.

The report’s authors highlighted two specific ways Twitter increases our ability to assimilate – and then use – all this new information:

1. Idea scouting – when you identify new ideas by following experts on Twitter.

2. Idea connecting – when you see new ideas on Twitter and relate to how these could be used in your own organization as a business opportunity. Typically, this will involve sharing the new idea with the most appropriate stakeholders in the business.

It’s clear that when used wisely, Twitter can give a real boost to your career. So, if you’re a developer who works with SharePoint, how can you use Twitter to scout and connect new ideas? By following some of the experts in our list of the five best SharePoint pro Twitter accounts, that’s how!

1. Office 365 Community

Office 365 Community is a developer focused Twitter feed from Microsoft. It provides all the latest news and updates in technical aspects of Office 365 and SharePoint. By following Office 365 community you’ll be informed about:

  • The latest news about Microsoft events around the world.
  • General community information.
  • Information on previews and other releases.
  • Information about webinars, Twitter-jams and Q&A’s.

How will it help you?

Office 365 Community can give you direct insight into the latest news relating to SharePoint and related products. You’ll have an idea of what’s going on before anyone else and will be able to impress your boss with knowledge on new products and patches.

2. Chris O'Brien

SharePoint MVP, blogger and all-round expert; UK-based Chris O’Brien is definitely one to follow. With years of experience as a SharePoint consultant, and making regular appearances at Microsoft and SharePoint events, Chris is a big deal in the SharePoint dev world. By following Chris you’ll get all the latest on:

  • Reports and news on problems and fixes.
  • Responses to your burning SharePoint dev questions.
  • The latest news on webinars and related events.
  • Hot-off-the-press blogs by Chris himself.

How will it help you?

Following Chris will provide you with insights into SharePoint development and you’ll be made aware of Chris’ latest blogs as soon as they’re released. From great in-depth blogs to an insightful how-to, Chris shares plenty of his expert knowledge. Who wouldn’t want to access that?

3. Jeff Teper

As corporate vice-president of SharePoint and OneDrive, Jeff Teper is about as close to the SharePoint action as can be. He has enormous expertise and insight into the product’s future, which makes following Jeff a wise idea. Expect doses of:

  • Insights into the world of Microsoft.
  • A lot of love for developers!
  • The latest news on product development.

How will it help you?

We highlighted Jeff in our recent ‘SharePoint Experts’ post. With Jeff’s updates popping up in your newsfeed, you’ll always have the latest authoritative insights into the future of SharePoint - extremely useful if you want to know the direction SharePoint is headed.

4. Vesa Juvonen

Vesa Juvonen is a Microsoft-certified solution master and Senior Program Manager for SharePoint and Office 365. What this means is that the Helsinki native spends his days developing the vision for the future of SharePoint. You need to follow Vesa for:

  • Up-to-the-minute product release details.
  • Information about new version updates.
  • Insights into changes, plans and policy in Redmond.
  • New podcasts and webinars.

How will it help you?

Put simply, following Vesa will give you all the latest news right from the heart of SharePoint and Office 365. Vesa is a core member of the Office 365 Dev Patterns and Practices team, which provides regular blogs, documentation and best practice advice for developers, so following him means you’ll know how and where to get the latest updates and make your environment the best it can be.

5. Matthias Einig

An MVP and founder of influential development tools company Rencore, Matthias is a force to be reckoned with in the SharePoint world. Tweeting regularly on a wide range of news and updates, Matthias engages with the community about:

  • App development
  • Tips and tricks to improve your environment
  • Interesting and noteworthy news and articles from other sources
  • Info on the latest events

How will it help you?

Following Matthias will provide real insight into best practice around SharePoint development, and introduce you to a range of new ideas and innovations – key to giving you the inspiration you need to improve your own environment.

Looking to extend SharePoint to mobile devices? Try a free demo of SharePlus Enterprise today, and see how much more you can achieve from SharePoint on iOS.

Stacked Area Charts and Mathematical Approximations


I've previously noted that I think stacked area charts are frequently used when a conventional line chart would be a better option. Here is the (fictional) example I used previously and the conventional line chart alternative.

In short, if you want people to be able to make reasonably accurate judgments of the magnitudes of the individual components, and how they change depending on some other variable (such as time), the conventional line chart design is almost always going to be the best option. The lack of a steady baseline for all but the bottom component makes this task difficult for the stacked area chart.

Stacked area charts can be useful if you want to illustrate an ordered sum of components that change with another variable. Previously I suggested that the cost of milk production from farm to shop, and how it changes over time, might be a suitable example; here I'd like to consider something very different: selected mathematical series.

You're probably familiar with trigonometric functions like sine and cosine and you may also know about the exponential function and hyperbolic functions. It's fairly easy to draw graphs of these functions if you have a calculator of some sort. When tied up in complicated equations, these functions may become awkward to deal with. Consequently, alternative ways of approximating these functions can come in very handy.

The functions mentioned above are all analytic functions. What this really means is quite complicated to attempt to explain so I won't try to do so here. Instead I'll just stick to the following: these functions can all be written as a sum of powers of their argument (typically denoted x), that is, as a polynomial. Being explicit helps, so here is a way of rewriting the exponential function:
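$$ e^x \;=\; \sum_{n=0}^{\infty} \frac{x^n}{n!} \;=\; 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots $$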

In a similar manner, here is another way of expressing the cosine function:
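$$ \cos x \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} \;=\; 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$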

And here is the hyperbolic cosine function (typically written as cosh):
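$$ \cosh x \;=\; \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} \;=\; 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots $$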

In general, to get an exact value for one of these functions using summation we need to sum to infinity. This is not the case at the origin where all but the first term will equal 0. Close to the origin we will also get a good approximation as x is small. But how close and how good? We can plot the first few terms of, for example, the exponential function expression and see. The black line in the GIF below shows the exact exponential function, the blue wedges show the result of adding more and more terms from the right-hand side of the equation (from the zeroth power of x up to the 8th) for the exponential function above. The translucent red wedge indicates the area not covered by the polynomial approximation.

Below about x=1 we can see that the first three terms of the polynomial are a pretty good approximation for the exponential function. To get a good approximation around x=3 we need to go up to the sixth or seventh power of x (i.e. seven or eight terms of the polynomial). As the GIF below shows, even going to the eighth power of x isn't sufficient around x=6.
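As a quick numeric check at x=1: the first three terms give 1 + 1 + 1/2 = 2.5, adding the x³/3! term brings the total to about 2.67, and the exact value is e ≈ 2.718, which is consistent with the "pretty good" fit visible in the chart.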

We can look at the hyperbolic cosine function in a similar way, though there are no terms with odd powers of x.

As you might expect, when we look at large distances from the origin, we need more and more terms of the polynomial in order to closely match the exact function. At x=±6, adding up terms up to the 8th power of x is not sufficient to get a good approximation.

I think these are cases where stacked area charts can be of real use. We're genuinely interested in the progressive sums of components, not the individual parts and that's where stacked charts excel.

You probably noticed that I skipped over producing charts for the cosine function. That's because stacked charts fail there. Why? Because successive terms have opposite signs. While including more and more terms in the polynomial approximation does get you closer and closer to the exact function, you can't show this as a simple stack because some terms add to the total while others subtract. This is also a problem for the exponential function when x is negative: terms involving even powers of x will be positive while those involving odd powers of x will be negative. This is a purely visual issue that doesn't crop up when we plot lines instead of stacks.

Hopefully I've shown that stacked area charts can be useful when it is the ordered sums of components that are of interest and the conditions are right. For the conditions to be right, all components of the stack must share the same sign (or be 0) at each (visible) point along the horizontal axis.

Bring high volumes of complex information to life with Infragistics WPF powerful data visualization capabilities! Download free trial now!

UXify 2016 | Migrating from Desktop to Web and Mobile


UXify US is an annual half day conference about designing great digital experiences.

This year’s fourth annual user experience conference, UXify, brings together the community of academics, practitioners, technologists, and business leaders for a conversation about Migrating from Desktop to Web and Mobile.

Thought leaders from across the East Coast will be discussing applied design, UX, content strategy and development at Infragistics’ ultra-sleek, central NJ-based headquarters.

We invite you to join us! It's an excellent opportunity to network, learn, share knowledge and gain new insights!

Eventbrite - UXify US 2016 - Migrating from Desktop to Web and Mobile

When

Saturday, April 9

Noon - 5PM

Where

Infragistics Headquarters

2 Commerce Drive

Cranbury, NJ 08512


Why Should You Attend?

Designing great user experiences isn’t easy. It takes a great deal of knowledge, experience and passion. Now add the complexity inherent in legacy business applications and you start to get an idea how daunting it can be when the decision is made to take an existing desktop application and migrate it to the web.

Key to a successful migration is NOT responsive design. It’s an understanding of how the web-based environment (and access to it via mobile devices) changes the user’s interaction dynamic. Allow users to get up from their desks and suddenly you notice that they want to do their jobs (and interact with your application) in ways not previously considered (work from the train anyone?).

Selected 2015 Conference Presentations

[youtube] width="560" height="315" src="http://www.youtube.com/embed/g8P-v1fNgjc" [/youtube]

[youtube] width="560" height="315" src="http://www.youtube.com/embed/M3xYmeJQXpU" [/youtube]

[youtube] width="560" height="315" src="http://www.youtube.com/embed/Leau_mhtXnk" [/youtube]


-------------------------------------------------------

Kevin Richardson has been working in the area of user experience for 25 years. With a PhD in Cognitive Psychology, he has deep experience across business verticals.

On the weekends, you can find Kevin on his motorcycle, racing for Infragistics Racing.

Different ways of injecting dependency in an AngularJS Application


When you start learning the very first characteristics of AngularJS, you may come across something called Dependency Injection (DI): the premise that AngularJS injects dependencies whenever an application needs them. As developers, our task is only to pass the dependencies to the module and everything else will be taken care of by AngularJS.

To create a controller, we pass the $scope object and other dependencies to the module’s controller function. For example, to create a ProductController, we pass the $scope object and a Calculator service as dependencies. As developers, our job is to pass the dependencies, and AngularJS will inject them whenever the application needs them.

As developers, we really don’t care about how AngularJS injects dependencies – we don’t need to know how the injection process works to develop applications. However, it is better if we know the different ways of passing dependencies. In AngularJS, dependencies can be passed in three possible ways. They are as follows:

  1. Passing a dependency as Function Arguments
  2. Passing a dependency as Array Arguments
  3. Passing a dependency using the $inject service

Let us explore these options one by one.

 Passing a dependency as a Function Argument

Perhaps most of the time you pass a dependency as a function argument, which is perfectly fine. For example, we pass a $scope object to create a controller as shown in the listing below:

app.controller("ProductController", function ($scope) {

    $scope.message ="Hey I am passed as function argument"

});

Passing dependencies as function arguments works perfectly fine until we deploy the application to production with a minified version of the application. We usually minify the application in production to improve performance, but passing dependencies as function arguments breaks when we minify the application, because AngularJS relies on the parameter names to work out which dependencies to inject.
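To see why, here is a hypothetical example of what a minifier might produce from the controller above (not the output of any particular tool):

// Hypothetical minified output: the parameter "$scope" has been renamed to "a",
// so AngularJS can no longer tell which service to inject and the controller breaks.
app.controller("ProductController", function (a) {
    a.message = "Hey I am passed as function argument"
});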

Obviously in production, for better performance, we would like to deploy the minified version of the application, but the application will break because the parameter names are changed to shorter aliases. To avoid this break in production, we should choose another option.

Passing a dependency as Array Arguments

Perhaps the most popular way of passing dependencies in an AngularJS application is to pass them as Array Arguments. When we pass dependencies as Array Arguments, the application does not break in production when we minify the application. We can do this in two possible ways.

  1. Using the Named function
  2. Using the Inline Anonymous function

Using the Named function

We can pass dependencies as Array Arguments with the named function as shown in the listing below:

var app = angular.module('app', []);

function ProductController($scope) {
    $scope.greet = "Infragistics";
}

app.controller('ProductController', ['$scope', ProductController]);

As you’ll notice, we are passing the dependency, the $scope object, in the array along with the controller function. More than one dependency can be passed, separated by commas. For example, we can pass both the $http service and the $scope object as dependencies, as shown in the listing below:

var app = angular.module('app', []);

function ProductController($scope, $http) {
    $scope.greet = $http.get("api.com");
}

app.controller('ProductController', ['$scope', '$http', ProductController]);

As we discussed earlier, passing dependencies as Array Arguments does not break the application when we minify it, because the dependency names are given as string literals and minifiers do not rename string literals.

Using the Inline Anonymous function

Personally I find using a named function much more convenient than using an inline anonymous function. For me, it’s easy to manage the named controller function. If you prefer the inline function, you can pass dependencies as array arguments exactly the same way you pass them in named controller functions. We can pass dependencies in an inline function as array arguments, as shown in the listing below:

var app = angular.module('app', []);

app.controller('ProductController', ['$scope', '$http', function ($scope, $http) {
    $scope.greet = "Foo is Great!";
}]);

Keep in mind that dependencies injected as Array arguments work even if we minify the application.

Passing a dependency using the $inject service

There is one more way to inject dependencies in AngularJS: by using the $inject service. In doing so, we manually inject the dependencies. We can inject $scope object dependencies using the $inject service as shown in the listing below:

function ProductController($scope) {
    $scope.greet = "Foo is Not Great!";
}

ProductController.$inject = ['$scope'];

app.controller('ProductController', ProductController);

Using the $inject service also does not break the application when we minify the application for production. Most often we will find $inject services being used to inject dependencies in unit testing of the controller.
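As a rough sketch of what that looks like in practice (assuming the Jasmine test framework and the angular-mocks module, neither of which is part of this article's setup), a test for the ProductController above might be written like this:

describe("ProductController", function () {

    // Load the application module defined earlier: angular.module('app', [])
    beforeEach(module("app"));

    it("sets the greeting on the scope", inject(function ($rootScope, $controller) {
        var scope = $rootScope.$new();

        // $controller builds the controller and resolves the names listed in
        // ProductController.$inject; here we supply the scope we want to inspect.
        $controller("ProductController", { $scope: scope });

        expect(scope.greet).toBe("Foo is Not Great!");
    }));
});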

Before we end this article, let us see how we can use $inject to inject a service into a controller in a real application. We have created a service as shown in the listing below:

app.factory("Calculator", function () {return {
        add:function (a, b) {return a + b;
        }
    }
});

We need to use a Calculator service inside CalController. The CalController can be created as shown in the listing below:

app.controller('CalController', CalController);

function CalController($scope, Calculator) {
    $scope.result = 0;
    $scope.add = function () {
        $scope.result = Calculator.add($scope.num1, $scope.num2);
    };
}

At this point, the application should work because dependencies are passed as function arguments. However, the application will break when we minify it. So let us go ahead and inject the dependencies using the $inject as shown in the listing below:

CalController.$inject = ['$scope', 'Calculator'];

On the view, the controller can be used as shown below:

<div ng-controller="CalController">
    <input type="number" ng-model="num1" placeholder="Enter number 1" />
    <input type="number" ng-model="num2" placeholder="Enter number 2" />
    <button ng-click="add()">Add</button>
    {{result}}
</div>

And there you have it: how to inject dependencies in AngularJS apps. I hope you find this post useful, and thanks for reading!

Top Developer Meetups: Baltimore, MD


Our series would be incomplete if we didn't take a moment to head to Maryland and dig into the meetups in Baltimore! Check out the infographic below and mark down a meeting date!


Webinar Recap: Creating a Single Page Application using AngularJS and Web API


On February 19th we hosted a webinar titled “Creating a Single Page Application using AngularJS and Web API” for the Indian region and we’d like to share the presentation and recorded webinar with you now!

In the webinar, we covered everything you need to know to create your own SPA, including the ins and outs of Web APIs, how to expose CRUD operations on data, how to create different views for your SPA, and more. You can view the recording of the entire presentation here:

[youtube] width="560" height="315" src="http://www.youtube.com/embed/amNciEY29vQ" [/youtube]

You can also find presentation slides here.

Once again, thank you so much for your interest in our webinars – and we look forward to seeing you at a future webinar!


Access at Arm’s Reach


I recently waited for a flight in Newark Liberty International Airport’s Terminal C. Expecting the typical travelling experience of delayed departures and time wasted waiting at the gate, I was surprised to find that the poorly rated airport had implemented a massive upgrade to their customer experience by installing iPads at every seat in the terminal.


This upgrade, courtesy of the innovative company OTG, drastically changes the travelling experience by providing a means for all passing travelers to monitor their flight status, browse the web, and even order food straight to their seat and pay for it with a built-in credit card reader. Experiencing this new way to kill time in an airport terminal made me consider the effect of minimizing the gaps between customers and satisfaction. 

In the past, grabbing a quick bite to eat between connecting flights has rarely been more appetizing than a stop at a magazine stand or fast food counter. A stressed traveler pressed for time will sacrifice quality for ease of access (a choice familiar to everyone who has ever taken a road trip on the interstate highways). And keeping yourself occupied while waiting for a flight (and eating your pre-packaged turkey wrap) usually involves a frustrating interaction with unreliable airport Wi-Fi or searching for an available charging station. With the installation of new, self-serve technology (and free Internet access, including your flight’s gate number and boarding time), a larger range of quality choices has become easily accessible to customers. Any traveler can pick an open seat and gain instant access to the range of services provided by United.


This self-serve method has the power to change what was previously a dreaded experience into something like a casual stop in the Apple store, all while minimizing the stress of flying. Users of all ages can walk up to an inviting interface and explore what it has to offer. By providing a commitment-free way to spend time at arm’s reach, United has demonstrated that they understand some of our fundamental needs as airport users and actually improved our experience.

All you need to know about the .NET Foundation


Microsoft’s .NET Foundation has now been running for around two years, after its initial announcement by Scott Guthrie at the Build 2014 conference (highlights of which can be found here). The .NET Foundation was created to foster the open development of the .NET ecosystem, and to use community participation and rapid innovation to help fortify it. But, for that to sound like any kind of impressive feat, we first need to understand what the .NET Framework is.

What is .NET?

The .NET Framework began its life as a proprietary framework for building Windows apps, but Microsoft has since changed the licensing model to more closely follow a contemporary, community-developed open source project.

Almost all .NET Framework applications, components and controls are built using Microsoft’s Framework Class Library (FCL) as a basis. One of the big selling points of .NET is its language interoperability – a feature which allows an application’s code to be written in different languages. The primary language used in .NET applications is C#, but other popular options include:

  • VB.NET
  • J#
  • F#

Microsoft has, of course, deeply integrated .NET into its integrated development environment (IDE), Visual Studio. As with many Microsoft products, you get the most out of them by going all in: using C#, .NET and Visual Studio together offers a host of benefits.

The .NET Foundation

The .NET Foundation is an independent organization created to foster open development and collaboration around the Microsoft .NET Framework. It serves as a forum for both community and commercial developers to broaden and strengthen the future of the .NET ecosystem by promoting openness and community participation to encourage innovation.

All contributions carry standard open source licenses, and a lack of platform restrictions means users can run them on any platform. Users are also able to integrate suggestions and submissions from other developers. The Foundation was heralded as the next step for open source at Microsoft: the umbrella under which all these projects are contributed, and the foundation on which projects and code can be released as open source.

Some of the founding contributions to be included under the .NET Foundation are:

  • ASP.NET
  • Xamarin
  • “Roslyn”
  • Microsoft Azure SDK
  • Windows Phone Toolkit

 

Who’s in it?

Projects that are currently under the stewardship of the .NET Foundation include the .NET Compiler Platform (AKA “Roslyn”), which includes the C# and Visual Basic (VB) .NET compilers, both available via the traditional command-line programs. Roslyn also exposes modules for the syntactic analysis of code and for code emission.

The ASP.NET family (along with Roslyn) was open sourced via MS Open Technologies. ASP.NET was first released in 2002 along with the first iteration of the .NET Framework, and is built on the Common Language Runtime (CLR). This means programmers can write ASP.NET applications using any supported .NET language.

Xamarin has contributed several open source projects to the .NET Foundation, including the Xamarin.Mobile and Xamarin.Auth APIs, as well as the very popular MailKit and MimeKit projects.

How does it help?

The .NET Foundation supports .NET open source in a number of different ways. It offers the benefits of the .NET platform to the wider community of developers, and promotes the benefits of the open source model to developers already using .NET. The Foundation also provides administration and support for multiple .NET open source projects assigned to it. New .NET projects joining the foundation can receive mentorship and access to current developers working with .NET open source projects. The foundation also works with Microsoft and the broader industry in attempts to increase the exposure of open source projects in the community.

Services for .NET Foundation Projects include:

  • Project Guidance and Mentoring
  • IP and Legal
  • Technical Support
  • Marketing and Communications
  • Financial Support

Get involved!

The .NET Foundation is always looking for involvement from the community, whether it’s contributing to a project, new submissions or just simply spreading the word. Join in on community conversations regarding the Foundation on the community forum.

Interested in discovering more? Find out everything there is to know on the .NET Foundation - and how you can make a difference - on their website. You can also follow their blog and Twitter page, or check them out on GitHub.

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.

How to Impress Your Customers with Better Data UX Design


We can all agree that a good user experience (UX) is core to retaining customers for your product. People’s tolerance for below-average UX is continually declining, and they feel no shame in deleting an app just moments after downloading it if it’s slow to load or unintuitive to use. However, the terminology regarding what makes for good data UX design is somewhat clouded. In this post we’ll explore what exactly determines ‘good UX’, but first let’s look at what it means (and what it takes) to impress your customer.

According to research, it takes all of 7 seconds for us to form our first impressions. As shallow as this may make us seem as humans, we’re of the opinion that first impressions play second fiddle to the long-term impression you leave on your audience. Being able to retain your audience’s attention during a presentation, for example, is good – but having them discuss it a week down the line is far more impressive. The goal should always be to get your audience, customer, or whoever it may be thinking in their own time – be that because their experience was particularly appealing or intuitive, it struck a chord with them because of the way it was displayed, or because of the content they viewed. Or, even better, all of the above!

First steps

So what can you do to make your UX memorable, to make it stand out? The first step is understanding exactly what is meant by UX design. From the words of experts in the field:

“UX is the process of designing (be that digital or physical) products that are useful, easy to use and delightful to interact with.”

Taking from that, UX is essentially the experience that user has when they interact with your product. So, by definition, UX design is about the decisions that are made regarding how the user and the product will interact. So, good data UX happens when these decisions are informed and a high level of consideration and time is taken over them, resulting in the data you’re displaying having a bigger impact on your audience.

Data to dashboard

A good data visualization will provide a unique or striking insight into a set of data, drawing out interesting comparisons or differences to show your customer or viewer. A dashboard is a combination of multiple visualizations on one screen, and so opens up even more possibilities for comparing and contrasting data. Dashboards are a fantastic way to impress your audience, and when combined with an intuitive user experience they are sure to grab attention.

One example of this is social analytics statistics. Coming with the rise of social media in the enterprise, social analytics are becoming increasingly pertinent and popular methods of data analysis. Modern Business Intelligence (BI) tools offer templates for analytics that are popular with the community – be it the latest Facebook, Google or Twitter statistics.

Don’t get caught out

Creating a fluid and appealing user experience for your data is extremely important, but just as important is the actual data you put in. So, when it comes to the actual content of your dashboard, there are a few things to be wary of.

It’s not all about numbers

With such vast quantities of data of different shapes and sizes out there, it’s easy to get lost in the numbers. It’s even easier, however, to end up over-relying on those numbers. Rolling the behavior of millions into a single number is not always a good thing. Even organized sets of data don’t answer every question in regards to data-driven UX design. These gaps can be filled by qualitative insights or the lesser-known “thick data” - data that provides insights into the everyday lives of consumers, explaining why they have certain preferences and the reasons they behave the way they do.

Bigger isn’t always better

Sometimes, bigger is better. When dealing with something subjective, the more responses you can accumulate the better, as they provide greater accuracy and give you more confidence in your results. However, for some analyses, sheer volume is not the sole answer, and metrics such as variety can also be very important. More diverse sources create a more nuanced picture which can better encapsulate your findings. Perhaps a better way to put it is that broader is always better.

Objectively speaking

As effective as data can be in expressing your findings, always remember that ‘it’s not gospel’. Datasets are created by humans, who interpret them and assign meaning to them. There are both limitations and at least some level of bias in every type of data, but good data should describe its biases and always provide context.

Data and design on one screen

With such ample data out there, making data-driven decisions has never been easier. ReportPlus, from Infragistics, combines the power of data analysis with the appeal of sleek user design in one enterprise tool. Available on the major mobile platforms iOS and Android, and soon on desktop and web, it lets users create the latest compelling data visualizations and share key metrics with their teams, wherever they happen to be.

Try ReportPlus free today and begin turning dead data into dynamic decision making.

It's OK not to lead


When I first entered the workforce, I was in awe of the programmers around me.  I'd spent 4 years of college learning how to implement Alpha-Beta pruning and various flavors of sort(), while these guys had been building things that real people used in the real world for years, or even decades.  I imagine it was, on a much smaller and more cerebral scale, the way a college football player feels upon turning pro.

This didn't last.  After a while, I viewed them less with reverence and more as simple peers from whom I could learn, given their comparable wealth of experience.  As the years passed, the scales began to balance as I acquired more and more experience.  I was no longer the greenest developer around, and there was a healthy mix of people more and less experienced than I was.

The Path to Leadership

As this transformation took place, I started to develop opinions and preferences of my own, based on experience and on becoming a student of the craft.  I began to read books by people like Uncle Bob Martin and Michael Feathers.  And, sometimes, these opinions I was developing didn't line up with those of some of the people around me.  I started to notice that the practices advocated by industry thought leaders were dismissed or ignored by some of the people around me.

At the same time, I began to notice that the people making technical decisions from positions with names like, "Architect," "Principal Software Engineer," and "Team Lead," weren't necessarily the best, technically.  Often, these leadership roles seemed to be as much a function of number of years with the company and familiarity with the domain as they were a function of technical acumen.  And where the developers more experienced than me seemed diverse in their skill, philosophy and approach, the decision-makers seemed disproportionately to value a "crank out reams of code" approach.  And, of course, when you have differences of opinion with the decision-makers, it doesn't tend to go your way.

As I grew in my career and moved around a bit, this philosophical friction with entrenched technical leaders led me to the conclusion that the path to joy was to become a company's technical decision maker.  I'm also an ambitious, overachiever sort, so wanting to 'ascend' to a leadership position fell in line with my goals anyway.  I'd work hard and bide my time until I earned a leadership position, and would remain unsatisfied with my lot in life until that time.

Leadership Realized

There was a happy-ish ending to the first part of this story.  I found my way into leadership, through a combination of hard work, years in the field, and moving around some.  I was finally in a position where I owned the decision making.  I was no longer subject to determinations that the best approach to a public API was one giant class with hundreds of static methods.  I could preside over a clean codebase and a clean approach.

And, I did.  Things went well.  But there was a hidden irony to this which was that, as I acquired more leadership responsibilities, I dealt less with actual implementation.  I had all of the freedom in the world to make technical decisions, and no time with which to make them.  I delegated the work, for the most part, and trusted team members to do the right thing.

What I finally realized was that what I wanted wasn't actually to be the lead, per se.  By nature, I'm nothing resembling a micromanager, and I wasn't looking to be the person that made decisions for groups.  I was just looking not to have that done to me.  I didn't want to be micromanaged.  It took a lot of years, a number of promotions, and a wide variety of roles to understand this.

Team Member Is Just Fine

I was working as a CIO when I had this epiphany.  I had the IT and software development organizations reporting to me, and writing code was something I did in the evenings in order to keep my skills sharp.  Some serious reflection and evaluation of my situation led me to back away from that role and strike out on my own.

I became a freelancer, doing a mixture of application development, coaching, and IT management consulting.  The result was that I got to return to situations where I was closer to or directly involved with actually building stuff, but where I still was not being micromanaged.  And, I was quite happy with that.

But another interesting thing happened during this time, which was the departure of any remaining feeling that my role on a team represented achievement or rank.  The formation of a software team isn't as simple or obtuse as "the best, most qualified person is the leader."  I could choose to plug into a team to lend a hand and take technical direction from someone else, without it somehow lessening my standing or the perception of my competence.

This was a huge and surprising relief.  You see, being a team or department leader can be exhausting.  The buck stops with you and so you only get a small grace period where your answer can be "I don't know."  There's a point where anyone on the team can say, "gosh, that's just over my head -- I need help."  As the leader and closer, that's where you step in and handle it.  And, that's a lot of pressure.

I'd love to be able to give my younger self this message, but I'll have to settle for giving it to everyone reading.  You can find good teams, run by folks that aren't micro-managers.  You can put yourself in situations where you're empowered to make good decisions.  If you want leadership positions and to follow that path with your career, by all means, do it.  But understand that it's not a better position nor is it an indicator of alpha status and competence.  Pitching in, writing code, and contributing to a team is a great way to spend a career.   It's perfectly okay not to lead.

Deliver the most demanding and beautiful touch-friendly dashboards for desktop and mobile apps with over 75 HTML5 charts, gauges, financial charts, statistical and technical indicators, trend lines and more. Download Ignite UI now and experience the power of Infragistics jQuery controls.

The Transition Effect – Animations in our UI


Essential to the process of creating and developing engaging interfaces for our users are thoughtful animations. In conjunction with wireframe testing and validation, we also need to be considering the overall design language and aesthetics. There are so many things to consider. As designers we want clean typography for the content, a beautiful color palette, expressive yet immediately understandable iconography but this isn’t enough. A successful product is one that can connect to the user on a personal level. Animations make this connection.

It’s only natural, as humans, to expect motion in our lives. I pull out my desk chair and it rolls. Drop a basketball and watch it bounce. These expectations, whether we realize it or not, are in our minds when interacting with our apps.

Communicate Status and Provide Feedback

Great transitions have meaning; we don’t want to water down our products with needless animations. These will distract users from the content. For example, a meaningful transition may occur after a user completes a form, or to notify a user that the information they submitted was received by the app.

*created with Indigo Studio

Sure, you could just have the confirmation appear over the login form, but it would feel forced and unnatural. The result would require attention and place an additional, albeit small, cognitive load on the user in order to make the connection between their action and what happened on their screen. By having a well thought out transition, the user is guided on a journey from one visual state (the sign up form) to the next visual state (confirmation message), minimizing attentional and cognitive requirements.

Direct the User’s Attention

Transitions may also be used to direct your user’s attention to certain aspects of the content. In doing so, you help the user focus on what is relevant as they complete their task.

In this example, as the side panel transitions in, the main content animates to the lower right corner of the viewport. This puts the user’s attention solely on the content that is in the navigation drawer, allowing them to focus on what is important in this visual state. When closing the navigation panel, notice how the content transitions back to its original position. This technique also helps users remember the “resting” location of the navigation.

Be smart about your transitions and don’t be afraid to experiment. Test with users to validate ideas. Remember to have your transitions guide the user from one visual state to the next without distracting them from the content. When returning to a visual state (i.e. closing a menu), all elements should animate back to their original locations. You’ll find that by providing transitions that delight, users will feel a greater connection with your app.

How to Create a Custom Action Filter in ASP.NET MVC


In ASP.NET MVC, filters are used to inject logic at different levels of request processing and allow us to share that logic across controllers. For example, let’s say we want to run security logic or logging logic across controllers. To do so, we’ll write a filter containing that logic and enable it across all controllers. When we enable a filter across all controllers or actions, the filter runs for every incoming HTTP request.

Let us consider a logging scenario: for every incoming request, we need to log some data to files on the basis of some logic. If we don’t create this logic inside a custom filter, then we will have to write the logic in each controller’s action. This mechanism leads to two problems:

  1. duplication of code; and
  2. violation of the Single Responsibility Principle: actions will now also perform the additional task of logging.

We can mitigate the problems above by putting the logging logic inside a custom action filter and applying the filter to all the controllers.

Have you ever come across source code like that shown in the image below? [Authorize] is an authorization filter, and it gets executed before any action method execution.  The Authorize filter is part of MVC, but if needed, we can create a custom filter too.

In ASP.NET MVC, there are five types of filters:

  1. Authentication Filter  
  2. Authorization Filter
  3. Action Filter
  4. Result Filter
  5. Exception Filter

The sequence in which the various filters run is as follows:

  • The Authentication filter runs before any other filter or action method
  • The Authorization filter runs after the Authentication filter and before any other filter or action method
  • The Action filter runs before and after any action method
  • The Result filter runs before and after execution of any action result
  • The Exception filter runs only if action methods, filters or action results throw an exception

In a diagram, we can depict the sequence of filter execution as shown below:

Each filter has its own purpose; however, most of the time you will find yourself writing a custom action filter. Custom action filters get executed before and after the execution of an action.

Custom Action Filter

We write custom action filters for various reasons. We may have a custom action filter for logging, or for saving data to the database before any action execution. We could also have one for fetching data from the database and setting it as global values for the application. We can create custom action filters for various reasons, including but not limited to:

  • Creating a privileged authorization
  • Logging the user request
  • Pre-processing image upload
  • Fetching data to display in the layout menu
  • Localization of the application
  • Reading browser user agent information to perform a particular task
  • Caching, etc.

To create a custom action filter, we need to perform the following tasks:

  1. Create a class
  2. Inherit it from ActionFilterAttribute class
  3. Override at least one of the following methods:
  • OnActionExecuting – This method is called before a controller action is executed.
  • OnActionExecuted – This method is called after a controller action is executed.
  • OnResultExecuting – This method is called before a controller action result is executed.
  • OnResultExecuted – This method is called after a controller action result is executed.

Let us create a custom action filter which will perform two tasks, in the most simplistic way. Of course you can write more sophisticated code inside the custom action filter, but we are going to create a custom filter with the name MyFirstCustomFilter, which will perform the following two tasks:

  1. Set some data value in global ViewBag.
  2. Log the incoming request to the controller action method.

The filter can be created as shown in the listing below:

using System;
using System.Diagnostics;
using System.Web.Mvc;

namespace WebApplication1
{
    public class MyFirstCustomFilter : ActionFilterAttribute
    {
        public override void OnResultExecuting(ResultExecutingContext filterContext)
        {
            // You may fetch data from the database here
            filterContext.Controller.ViewBag.GreetMessage = "Hello Foo";
            base.OnResultExecuting(filterContext);
        }

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var controllerName = filterContext.RouteData.Values["controller"];
            var actionName = filterContext.RouteData.Values["action"];
            var message = String.Format("{0} controller:{1} action:{2}", "onactionexecuting", controllerName, actionName);
            Debug.WriteLine(message, "Action Filter Log");
            base.OnActionExecuting(filterContext);
        }
    }
}

In the above listing, we are simply setting a ViewBag property for the controllers being executed. The ViewBag property will be set before a controller action result is executed, since we are overriding the OnResultExecuting method. Also, we are overriding OnActionExecuting to log information about the controller’s action method.

Now that we’ve created the custom action filter, we can apply it at three possible levels:

  • As a Global filter
  • At a Controller level
  • At an Action level

Applying as a Global Filter

We can apply a custom filter at the global level by adding it to the global filter collection in App_Start\FilterConfig. Once added at the global level, the filter is applied to all the controllers in the MVC application.

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new MyFirstCustomFilter());
    }
}

Filter at a Controller level

To apply a filter at the controller level, we can apply it as an attribute to a particular controller. When applied at the controller level, the filter applies to all the actions of that controller. We can apply MyFirstCustomFilter to HomeController as shown in the listing below:

[MyFirstCustomFilter]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

Filter at an Action level

Finally, to apply a filter to a particular action, we can apply it as an attribute on the action, as shown in the listing below:

[MyFirstCustomFilter]
public ActionResult Contact()
{
    ViewBag.Message = "Your contact page.";
    return View();
}

And that’s about it for custom action filters! I hope you find this post useful, and thanks for reading. Have something to add? Feel free to leave a comment!

How to Implement the Repository Pattern in ASP.NET MVC Application


The Repository Pattern is one of the most popular patterns for creating an enterprise-level application. It keeps us from working directly with the data in the application and creates separate layers for database operations, business logic and the application’s UI. If an application does not follow the Repository Pattern, it may have the following problems:

  • Duplicate database operation code
  • The UI is needed to unit test database operations and business logic
  • External dependencies are needed to unit test business logic
  • Difficulty implementing database caching, etc.

Using the Repository Pattern has many advantages:

  • Your business logic can be unit tested without data access logic;
  • The database access code can be reused;
  • Your database access code is centrally managed, so it is easy to implement database access policies such as caching;
  • It’s easy to implement domain logic;
  • Your domain entities or business entities are strongly typed with annotations; and more.

On the internet, there are millions of articles written around Repository Pattern, but in this one we’re going to focus on how to implement it in an ASP.NET MVC Application. So let’s get started!

Project Structure

Let us start with creating the Project structure for the application. We are going to create four projects:

  1. Core Project
  2. Infrastructure Project
  3. Test Project
  4. MVC Project

Each project has its own purpose. You can probably guess from the projects’ names what they’ll contain: the Core and Infrastructure projects are class libraries, the Web project is an MVC project, and the Test project is a unit test project. Eventually, the projects in the solution explorer will look as shown in the image below:

As we progress in this post, we will learn in detail about the purpose of each project, however, to start we can summarize the main objective of each project as the following:

So far our understanding of the different projects is clear. Now let us go ahead and implement each project one by one. During the implementation, we will explore the responsibilities of each project in detail.

 

Core Project

In the core project, we keep the entities and the repository interfaces (the database operation interfaces). The core project contains information about the domain entities and the database operations required on the domain entities. In an ideal scenario, the core project should not have any dependencies on external libraries. It must not have any business logic, database operation code, etc.

In short, the core project should contain:

  • Domain entities
  • Repository interfaces or database operations interfaces on domain entities
  • Domain specific data annotations

The core project can NOT contain:

  • Any external libraries for database operations
  • Business logic
  • Database operations code

While creating the domain entities, we also need to make a decision on the restrictions on the domain entities properties, for example:

  • Whether a particular property is required or not. For instance, for a Product entity, the name of the product should be a required property.
  • Whether the value of a particular property is in a given range or not. For instance, for a Product entity, the price property should be in a given range.
  • Whether the length of a particular property should not exceed a given value. For instance, for a Product entity, the name property value should be less than the maximum length.

There could be many such data annotations on the domain entities properties. There are two ways we can think about these data annotations:

  1. As part of the domain entities
  2. As part of the database operations logic

It is purely up to us how we see data annotations. If we consider them part of the database operations, then we can apply restrictions using the database operation library's API. We are going to use the Entity Framework for database operations in the Infrastructure project, so we could use the Entity Framework Fluent API to annotate the data.

If we consider them part of the domain, then we can use the System.ComponentModel.DataAnnotations library to annotate the data. To use this, right click on the Core project’s References folder and click on Add Reference. From the Framework tab, select System.ComponentModel.DataAnnotations and add it to the project.

We are creating a ProductApp, so let us start with creating the Product entity. To add an entity class, right click on the Core project and add a class, then name the class Product.

using System.ComponentModel.DataAnnotations;

namespace ProductApp.Core
{
    public class Product
    {
        public int Id { get; set; }

        [Required]
        [MaxLength(100)]
        public string Name { get; set; }

        [Required]
        public double Price { get; set; }

        public bool inStock { get; set; }
    }
}

We have annotated the Product entity properties with Required and MaxLength. Both of these annotations are part of System.ComponentModel.DataAnnotations. Here, we have considered restriction as part of the domain, hence used data annotations in the core project itself.

We have created the Product entity class and applied data annotations to it. Now let us go ahead and create the repository interface. But before we do, let us understand: what is a repository interface?

The repository interface defines all the database operations possible on the domain entities. Which operations can be performed on the domain entities is part of the domain information, hence we will put the repository interface in the Core project. How these operations are performed will be part of the Infrastructure project.

To create a repository interface, right click on the Core project and add a folder named Interfaces. Once the Interfaces folder is created, right click on it, select Add New Item, and from the Code tab select Interface. Name the interface IProductRepository.

using System.Collections.Generic;

namespace ProductApp.Core.Interfaces
{
    public interface IProductRepository
    {
        void Add(Product p);
        void Edit(Product p);
        void Remove(int Id);
        IEnumerable<Product> GetProducts();
        Product FindById(int Id);
    }
}

Now we have created a Product entity class and a Product Repository Interface. At this point, the core project should look like this:

Let us go ahead and build the core project to verify everything is in place and move ahead to create Infrastructure project.

 

Infrastructure Project

The main purpose of the Infrastructure project is to perform database operations. Besides database operations, it can also consume web services, perform IO operations and so on. So the Infrastructure project may perform the following operations:

  • Database operations
  • Working with WCF and Web Services
  • IO operations

We can use any database technology to perform the database operations. In this post we are going to use Entity Framework, and we will create the database using the Code First approach. In the Code First approach, the database is created on the basis of the classes; here it will be created from the domain entities in the Core project.

To create the database from the Core project domain entity, we need to perform these tasks:

  1. Create DataContext class
  2. Configure the connection string
  3. Create a database initializer class to seed data in the database
  4. Implement the IProductRepository interface

 

Adding References

First let’s add references of the Entity Framework and ProductApp.Core project. To add the Entity Framework, right click on the Infrastructure project and click on Manage Nuget Package. In the Package Manager Window, search for Entity Framework and install the latest stable version.

To add a reference of the ProductApp.Core project, right click on the Infrastructure project and click on Add Reference. In the Reference Window, click on the Project tab and select ProductApp.Core.

DataContext class

The objective of the DataContext class is to create the database in the Entity Framework Code First approach. We pass a connection string in the constructor of the DataContext class. By reading the connection string, Entity Framework creates the database. If a connection string is not specified, Entity Framework creates the database on the local database server.

In the DataContext class:

  • Create a DbSet<Product> property. This is responsible for creating the table for the Product entity.
  • In the constructor of the DataContext class, pass the name of a connection string that specifies the information needed to create the database, such as the server name, database name and login information.
  • If no connection string is passed, Entity Framework creates a database with the name of the data context class on the local database server.
  • The ProductContext class inherits the DbContext class.

The ProductContext class can be created as shown in the listing below:

using ProductApp.Core;
using System.Data.Entity;

namespace ProductApp.Infrastructure
{
    public class ProductContext : DbContext
    {
        public ProductContext()
            : base("name=ProductAppConnectionString")
        {
        }

        public DbSet<Product> Products { get; set; }
    }
}

Next we need to work on the connection string. As discussed earlier, we can either pass a connection string to specify the database creation information or rely on Entity Framework to create a default database at the default location for us. We are going to specify the connection string, which is why we passed the connection string name ProductAppConnectionString in the constructor of the ProductContext class. In the App.Config file, the ProductAppConnectionString connection string can be created as shown in the listing below:

<add name="ProductAppConnectionString" connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=ProductAppJan;Integrated Security=True;MultipleActiveResultSets=true" providerName="System.Data.SqlClient" />

Database Initializer class

We create a database initializer class to seed the database with some initial values at the time of creation. To create the database initializer class, create a class which inherits from DropCreateDatabaseIfModelChanges. There are other base classes available for creating a database initializer. If we inherit DropCreateDatabaseIfModelChanges, a new database will be created every time the model changes. So for example, if we add or remove properties from the Product entity class, Entity Framework will drop the existing database and create a new one. Of course this is not a great option, since the data will be lost too, so I recommend you explore the other initializer base classes as well (a sketch of one appears after the listing below).

The database initializer class can be created as shown in the listing below. Here we are seeding the product table with two rows. To seed the data:

  1. Override Seed method
  2. Add product to Context.Products
  3. Call Context.SaveChanges()
using ProductApp.Core;
using System.Data.Entity;

namespace ProductApp.Infrastructure
{
    public class ProductInitalizeDB : DropCreateDatabaseIfModelChanges<ProductContext>
    {
        protected override void Seed(ProductContext context)
        {
            context.Products.Add(new Product { Id = 1, Name = "Rice", inStock = true, Price = 30 });
            context.Products.Add(new Product { Id = 2, Name = "Sugar", inStock = false, Price = 40 });
            context.SaveChanges();
            base.Seed(context);
        }
    }
}
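As a gentler alternative (shown here only as a sketch, not something this sample application uses), Entity Framework also ships a CreateDatabaseIfNotExists initializer, which seeds a brand-new database but never drops an existing one:

using ProductApp.Core;
using System.Data.Entity;

namespace ProductApp.Infrastructure
{
    // Hypothetical alternative initializer: the database is created and seeded only
    // if it does not already exist, so existing data is never dropped.
    public class ProductCreateIfNotExistsDB : CreateDatabaseIfNotExists<ProductContext>
    {
        protected override void Seed(ProductContext context)
        {
            context.Products.Add(new Product { Id = 1, Name = "Rice", inStock = true, Price = 30 });
            context.Products.Add(new Product { Id = 2, Name = "Sugar", inStock = false, Price = 40 });
            context.SaveChanges();
            base.Seed(context);
        }
    }
}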

So far, we have done all the Entity Framework Code First related work to create the database. Now let’s go ahead and implement IProductRepository interface from the Core project in a concrete ProductRepository class.

 

Repository Class

This is the class which will perform database operations on the Product entity. In this class, we will implement the IProductRepository interface from the Core project. Let us start by adding a class ProductRepository to the Infrastructure project and implementing the IProductRepository interface. To perform database operations, we are going to write simple LINQ to Entities queries. The ProductRepository class can be created as shown in the listing below:

using ProductApp.Core.Interfaces;
using System.Collections.Generic;
using System.Linq;
using ProductApp.Core;

namespace ProductApp.Infrastructure
{
    public class ProductRepository : IProductRepository
    {
        ProductContext context = new ProductContext();

        public void Add(Product p)
        {
            context.Products.Add(p);
            context.SaveChanges();
        }

        public void Edit(Product p)
        {
            context.Entry(p).State = System.Data.Entity.EntityState.Modified;
            // Persist the modified entity; without this call the edit is never saved.
            context.SaveChanges();
        }

        public Product FindById(int Id)
        {
            var result = (from r in context.Products where r.Id == Id select r).FirstOrDefault();
            return result;
        }

        public IEnumerable<Product> GetProducts()
        {
            return context.Products;
        }

        public void Remove(int Id)
        {
            Product p = context.Products.Find(Id);
            context.Products.Remove(p);
            context.SaveChanges();
        }
    }
}

So far we have created a Data Context class, a Database Initializer class, and the Repository class. Let us build the infrastructure project to make sure that everything is in place. The ProductApp.Infrastructure project will look as given in the below image:

 

Now we’re done creating the Infrastructure project. We have written all the database operations-related classes inside the Infrastructure project, so all the database-related logic is in a central place. Whenever any change in database logic is required, we only need to change the Infrastructure project.

 

Test Project

The biggest advantage of the Repository Pattern is testability: it allows us to unit test the various components without dependencies on other components of the project. For example, we have created the Repository class which performs the database operations; to verify the correctness of its functionality, we should unit test it, and we should be able to do so without any dependency on the web project or UI. Since we are following the Repository Pattern, we can write unit tests for the Infrastructure project without any dependency on the MVC project (UI).

To write unit tests for the ProductRepository class, let us add the following references to the Test project:

  1. Reference of ProductApp.Core project
  2. Reference of ProductApp.Infrastructure project
  3. Entity Framework package

 

To add the Entity Framework, right click on the Test project and click on Manage Nuget Package. In the Package Manager window, search for Entity Framework and install the latest stable version.

To add a reference of the ProductApp.Core project, right click on the Test project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Core.

To add a reference of the ProductApp.Infrastructure project, right click on the Test project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Infrastructure.

Copy the Connection String

Visual Studio always reads the config file of the running project. To test the Infrastructure project, we will run the Test project, hence the connection string should be part of the App.Config of the Test project. Let us copy and paste the connection string from the Infrastructure project into the Test project’s App.Config.

We have added all the required references and copied the connection string. Let’s go ahead now and set up the test class. We’ll create a test class named ProductRepositoryTest. The method marked with [TestInitialize] runs before each test, so in it we create an instance of the ProductRepository class and set the ProductInitalizeDB class as the database initializer to seed the data before the tests run. The test initializer can be written as shown in the listing below:

[TestClass]
public class ProductRepositoryTest
{
    ProductRepository Repo;

    [TestInitialize]
    public void TestSetup()
    {
        ProductInitalizeDB db = new ProductInitalizeDB();
        System.Data.Entity.Database.SetInitializer(db);
        Repo = new ProductRepository();
    }
}

With the test initializer in place, let’s write the very first test to verify whether the ProductInitalizeDB class seeds two rows into the Product table. Since it is the first test we will execute, it will also verify whether the database gets created. So essentially we are writing a test:

  1. To verify database creation
  2. To verify number of rows inserted by the seed method of Product Database Initializer
[TestMethod]
public void IsRepositoryInitalizeWithValidNumberOfData()
{
    var result = Repo.GetProducts();
    Assert.IsNotNull(result);
    var numberOfRecords = result.ToList().Count;
    Assert.AreEqual(2, numberOfRecords);
}

As you can see, we’re calling the Repository GetProducts() function to fetch all the Products inserted while creating the database. This test is actually verifying whether GetProducts() works as expected or not, and also verifying database creation. In the Test Explorer window, we can run the test for verification.

To run the test, first build the Test project, then from the top menu select Test->Windows-Test Explorer. In the Test Explorer, we will find all the tests listed. Select the test and click on Run.

Let’s go ahead and write one more test to verify Add Product operation on the Repository:

[TestMethod]
public void IsRepositoryAddsProduct()
{
    Product productToInsert = new Product
    {
        Id = 3,
        inStock = true,
        Name = "Salt",
        Price = 17
    };

    Repo.Add(productToInsert);

    // If the Product inserts successfully, the number of records will increase to 3.
    var result = Repo.GetProducts();
    var numberOfRecords = result.ToList().Count;
    Assert.AreEqual(3, numberOfRecords);
}

To verify insertion of the Product, we are calling the Add function on the Repository. If the Product is added successfully, the number of records will increase from 2 to 3, and that is what we verify. On running the test, we will find that it passes.

In this way, we can write tests for all the database operations in the ProductRepository class. Because the tests pass, we can be confident that the Repository class is implemented correctly, which means the Infrastructure and Core projects can be used with any UI project (in this case, MVC).
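For instance, a couple of further tests for FindById and Remove might look like the sketch below. These assume the same seeded data from ProductInitalizeDB and the Repo instance created in TestSetup; the test names themselves are illustrative, not part of the original sample:

[TestMethod]
public void IsRepositoryFindsProductById()
{
    // "Rice" is seeded with Id = 1 by ProductInitalizeDB.
    var product = Repo.FindById(1);

    Assert.IsNotNull(product);
    Assert.AreEqual("Rice", product.Name);
}

[TestMethod]
public void IsRepositoryRemovesProduct()
{
    // "Sugar" is seeded with Id = 2; after removal it should no longer be found.
    Repo.Remove(2);

    Assert.IsNull(Repo.FindById(2));
}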

 

MVC or Web Project

Finally we have gotten to the MVC project! Like the Test project, we need to add the following references:

  1. Reference of ProductApp.Core project
  2. Reference of ProductApp.Infrastructure project

To add a reference of the ProductApp.Core project, right click on the MVC project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Core.

To add a reference of the ProductApp.Infrastructure project, right click on the MVC project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Infrastructure.

 

Copy the Connection String

Visual Studio always reads the config file of the running project. When the application runs, it is the MVC project that executes, so the connection string must also be part of the MVC project’s Web.config. Let’s copy and paste the connection string from the Infrastructure project into the MVC project’s Web.config.

 

Scaffolding the Application

We should have everything in place to scaffold the MVC controller. To scaffold, right click on the Controllers folder and select MVC 5 Controller with Views, using Entity Framework, as shown in the image below:

Next we will see the Add Controller window. Here we need to provide the model class and data context class information. In our project, the model class is the Product class from the Core project and the data context class is the ProductContext class from the Infrastructure project. Let us select both classes from the dropdowns as shown in the image below:

Also we should make sure that the Generate Views, Reference script libraries, and Use a layout page options are selected.

On clicking Add, Visual Studio will create the ProductsController and Views inside Views/Products folder. The MVC project should have structure as shown in the image below:

At this point if we go ahead and run the application, we will be able to perform CRUD operations on the Product entity.

Problem with Scaffolding

But we are not done yet! Let’s open the ProductsController class and examine the code. On the very first line, we will find the problem. Since we have used MVC scaffolding, MVC is creating an object of the ProductContext class to perform the database operations.

Any dependency on the context class binds the UI project and the database tightly to each other. As we know, the data context class is an Entity Framework component. We do not want the MVC project to know which database technology is being used in the Infrastructure project. On the other hand, we haven’t tested the data context class; we’ve tested the ProductRepository class. Ideally we should use the ProductRepository class instead of the ProductContext class to perform database operations in the MVC controller. To summarize:

  1. MVC scaffolding uses the data context class to perform database operations. The data context class is an Entity Framework component, so its use tightly couples the UI (MVC) to the database technology (EF).
  2. The data context class is not unit tested, so it’s not a good idea to use it.
  3. We have a tested ProductRepository class. We should use it inside the controller to perform database operations. The ProductRepository class also does not expose the database technology to the UI.

To use the ProductRepository class for database operations, we need to refactor the ProductsController class. To do so, there are two steps we need to follow:

  1. Create an object of ProductRepository class instead of ProductContext class.
  2. Call methods of ProductRepository class to perform database operations on Product entity instead of methods of ProductContext class.

In the listing below, I have commented out the code that uses ProductContext and called ProductRepository methods instead. After refactoring, the ProductsController class will look like the following:

using System;
using System.Net;
using System.Web.Mvc;
using ProductApp.Core;
using ProductApp.Infrastructure;

namespace ProductApp.Web.Controllers
{
    public class ProductsController : Controller
    {
        //private ProductContext db = new ProductContext();
        private ProductRepository db = new ProductRepository();

        public ActionResult Index()
        {
            //return View(db.Products.ToList());
            return View(db.GetProducts());
        }

        public ActionResult Details(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            // Product product = db.Products.Find(id);
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        public ActionResult Create()
        {
            return View();
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Include = "Id,Name,Price,inStock")] Product product)
        {
            if (ModelState.IsValid)
            {
                // db.Products.Add(product);
                // db.SaveChanges();
                db.Add(product);
                return RedirectToAction("Index");
            }
            return View(product);
        }

        public ActionResult Edit(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Edit([Bind(Include = "Id,Name,Price,inStock")] Product product)
        {
            if (ModelState.IsValid)
            {
                //db.Entry(product).State = EntityState.Modified;
                //db.SaveChanges();
                db.Edit(product);
                return RedirectToAction("Index");
            }
            return View(product);
        }

        public ActionResult Delete(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        [HttpPost, ActionName("Delete")]
        [ValidateAntiForgeryToken]
        public ActionResult DeleteConfirmed(int id)
        {
            //Product product = db.FindById(Convert.ToInt32(id));
            // db.Products.Remove(product);
            // db.SaveChanges();
            db.Remove(id);
            return RedirectToAction("Index");
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                //db.Dispose();
            }
            base.Dispose(disposing);
        }
    }
}

After refactoring, let’s go ahead and build and run the application – we should be able to do so and perform the CRUD operations.

Injecting the Dependency

Now we’re happy that the application is up and running, and it was created using the Repository pattern. But still there is a problem: we are directly creating an object of the ProductRepository class inside the ProductsController class, and we don’t want this. We want to invert the dependency and delegate the task of injecting the dependency to a third party, popularly known as a DI container. Essentially, ProductsController will ask the DI container to return the instance of IProductRepository.

There are many DI containers available for MVC applications. In this example we’ll use Unity, one of the simplest DI containers. To do so, right click on the MVC project and click Manage Nuget Package. In the Nuget Package Manager, search for Unity.Mvc and install the package.

Once the Unity.Mvc package is installed, let us go ahead and open the App_Start folder. Inside the App_Start folder, we will find the UnityConfig.cs file. In the UnityConfig class, we have to register the type. To do so, open the RegisterTypes function in the UnityConfig class and register the type as shown in the listing below:

public static void RegisterTypes(IUnityContainer container)
{
    // TODO: Register your types here
    container.RegisterType<IProductRepository, ProductRepository>();
}

We have registered the type with the Unity DI container. Now let us go ahead and do a little bit of refactoring in the ProductsController class. In the constructor of ProductsController we will take a reference to the repository interface. Whenever the application needs it, the Unity DI container will inject a concrete ProductRepository object by resolving the registered type. We need to refactor the ProductsController as shown in the listing below:

public class ProductsController : Controller
{
    IProductRepository db;

    public ProductsController(IProductRepository db)
    {
        this.db = db;
    }

Let us go ahead and build and run the application. We should have the application up and running, and we should be able to perform CRUD operations using the Repository Pattern and Dependency Injection!
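One design note, offered as an assumption rather than something the scaffolding or Unity does for us automatically: now that the controller no longer creates the repository itself, nothing in the controller disposes the underlying ProductContext. A common approach with Unity is to register the repository with a HierarchicalLifetimeManager (and have ProductRepository implement IDisposable to dispose its context), so the container takes care of cleanup when its child container is disposed. A minimal sketch of that registration:

public static void RegisterTypes(IUnityContainer container)
{
    // Hypothetical refinement: scope the repository to the container hierarchy so
    // Unity disposes it (if it implements IDisposable) along with the child container.
    container.RegisterType<IProductRepository, ProductRepository>(new HierarchicalLifetimeManager());
}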

Conclusion

In this article, we learned step by step how to create an MVC application following the Repository Pattern. In doing so, we put all the database logic in one place, and whenever changes are required we only need to change the repository and test it. The Repository Pattern also loosely couples the application UI with the database logic and the domain entities, and makes your application more testable.

I hope you found this post useful, thanks for reading!


How to use GitHub like a Pro


Let’s begin today’s post with a fact: As of right now (at the time of writing), GitHub has 21 million repositories and 9 million users - which isn’t too bad at all! For developers, GitHub offers an enormous range of tools, Wikis and information, so we want to help show you how to use it like a pro.

Before understanding what GitHub is, the first thing to do is understand what Git is. Git is an open source version control system created by Linus Torvalds, the creator of Linux. Like any other version control system, Git manages and stores the different versions of a project.

GitHub is built on top of Git: it brings your project onto the web, lets you share your code with other developers, and invites them to extend or improve it. It provides collaboration features like wikis, project information, version releases, lists of contributors, open and closed issues and more. All this allows developers to easily download the latest version of an application, make changes and push them back to the repository.

Git itself is a command-line tool for version controlling your source code. GitHub also provides a graphical client, GitHub Desktop, which lets you contribute to projects without the command line. You can download the desktop version of GitHub here.

Getting started with GitHub

Repository– A repository is a directory or storage space where all your project related files are stored. This can include source code files, user documentation or installation documentation; everything can be stored in the repository. Each project will have its own repository along with a unique URL on GitHub. You can create either a Public repository, which is free and open to everyone, or a Private repository, which is a paid option.

To create a repository, go to the Repository tab and click on ‘Create new repository’, where you can then fill in details like the repository name and description.

Forking– Forking a repository lets you create a copy of an original project as a repository in your own GitHub account, allowing you to work on the source code independently. You can make changes to the source code (such as fixing bugs) and commit these to your repository.

In order to fork a repository, you can navigate to the repository URL and click on ‘fork the repository’ to create a repository in your local account.

Commit– A commit records a set of changes to the source code. Every commit gets a unique ID that identifies exactly which changes to which files were submitted, along with a title and description that specify what was changed and what the commit signifies.

Once you have forked a repository, you can make changes to its files. Let’s update the readme.md file by adding a second line, as shown below.

To update this file, go to the ‘commit changes’ section located at the bottom of the file and update the title of the commit as well as the description:

Upon clicking ‘Propose File Change’, the changes will be committed to a new branch.

Pull Request - Once you are done making changes, you can submit a ‘Pull Request’ to the original project owner. If your fix or changes are approved after testing, they can pull your changes into the original project. All Pull Requests are managed from the self-titled tab, which shows every Pull Request submitted by each contributor. It compares the updated source code with the original source code and provides the list of files that were changed and committed. The owner of the project will have a comprehensive view of all the updated changes in each file, along with the comparison view.

Once you’re done committing changes in the new branch, you’ll be taken to a Pull Request screen to generate the pull request for your new file changes.

Click on the Create ‘Pull Request’ button. A Pull Request will be created and will be visible to the project owner under the pull request section.

Merging Pull Request– Once the changes in the Pull Request are reviewed and approved, it is time to merge them into the original source code. In order to merge the request into the original repository, you need to have push access to the repository.

The project owner can select the submitted Pull Request and review any changes from the ‘Files Changed’ tab. Clicking on each file will show you what has been changed, added or deleted.

Once the project owner verifies and approves the changes, the new project is ready to merge into the original project.

Click on ‘Merge pull request’ to merge it into the original branch.

GitHub Visual Studio Extension

GitHub is a powerful open source version control platform, and provides many capabilities through both the Git console and the GitHub Desktop client. But, like any source control system, it is most useful when it integrates directly into your developer tools. Fortunately, Microsoft and GitHub recently announced the availability of GitHub Enterprise in Azure, and they have also launched the GitHub Visual Studio Extension, allowing developers to easily connect and work with GitHub projects directly from Visual Studio. Team Explorer support for Git allows you to do commits, branching and conflict resolution. You can download the extension here.

Why GitHub?

GitHub is not just another version control to keep track of changes. Instead it is a distributed version control which allows users to share code with developers across the globe.

Other key benefits of GitHub include:

 

  • Support from the Open Source Community
  • A distributed Version Control system
  • Social Networking features like Wikis
  • Manage code with multiple options
  • Show off! If you’ve created something fancy, GitHub provides the easiest way to share it with the Open Source Community.

GitHub also offers social networking capabilities to help you increase your network so that your code can be updated by various developers. As the saying goes, many hands make light work, and for that reason alone you should be using it.

Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics jQuery controls.

 

 

Developer Quotes: Edition 5


Check out these two pearls of wisdom. Tweet 'em, like 'em, tumble 'em... hey, even print them out and tape them to your coworker's monitor! Whatever works for you! These have just the right amount of sass, so I know you're gonna love 'em.

Share With The Code Below!

<a href="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/7536.Jeff-Sickel.jpg"><img src="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/7536.Jeff-Sickel.jpg" alt="Developer Quotes - Sickel" /></a><br /><br /><br />Developer Quotes - Sickel <a href="http://www.infragistics.com/products/jquery">Infragistics HTML5 Controls</a>


Share With The Code Below!

<a href="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/6518.language.jpg"><img src="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/6518.language.jpg" alt="Developer Quotes - Language" /></a><br /><br /><br />Developer Quotes - Language <a href="http://www.infragistics.com/products/jquery">Infragistics HTML5 Controls</a>

Keming, or the importance of not being a click.


This is Important

Let me start with some namedropping, to prove the importance of the subject. Apple just changed the kerning of the word “click“ on the El Capitan website (and dozens of articles were instantly published on the topic), because the ‘c’ and ‘l’ were almost stuck together, forming a lovely ‘d’. And no marketing or PR department wants to deal with the consequences.

Subtle, but important kerning decision.

A small step for Apple, but a huge leap for kerning.

 

Get Comfortable with the Terms

KERNING, in short, is the distance between two letters in a word.

KEMING is a fake term to describe what happens when you don’t kern properly. Most often, two adjacent letters are placed so close to one another that they combine visually to form a third letter, sometimes making you sound like a click.

Keming as a term was first coined in 2008

Keming. n. The result of improper kerning. The term was first coined in 2008 by David Friedman.

 

Where Do We See Keming?

It’s everywhere - on websites, packaging, custom lettering, posters, menus, signs.

Most of the time it's just a small illegibility issue that bothers a handful of designers.

ST OP, in the name of love.

Often times, though, keming can cause confusion and mockery for your brand.

Spam Restaurant? (image source)

 

Or it can be really, really bad.

Again with this click...

 

Bonus: Diesel being Diesel, is being brave by breaking all the rules:

Diesel - Only the brave. (image source)

Now that we’ve seen that even Apple can make this mistake, here are some other beautiful examples we should watch out for:

  • FINAL
  • FLICK
  • pom
  • burn
  • pen is
  • therapist

Keming can produce bad results both when letters stick together (e.g. burn = bum) and when gaps appear in the middle of a word, separating it into two meaningful parts (e.g. therapist = the rapist). Check out some more “fun“ images here and here.

Here are some kerning tips to improve the look and feel of your design and not frustrate marketing and PR.

 

How to Avoid Keming:

1. Use trusted fonts. Every font has built-in kerning, and the free, cheap, poorly designed fonts can play expensive tricks on you.

2. Whenever you have to use auto kerning, double and triple check the end result

3. For print. Make sure that when you send print files over to a printing house, all your fonts are in curves. Always.

4. For print. Adobe’s Creative Cloud has the “optical“ option for kerning letters that automatically "makes typography great again“. Learn more about kerning in Illustrator

5. For web. Use either automatic kerning with CSS, or go the extra mile and use lettering.js, kerning.js or a similar library

6. Check again. Read the whole text. Give it to as many people as possible to proofread.

 

Fun and Games

How keen is your eye when it comes to kerning?

Play "Kern Me"

And lastly

Show me you give a FLICK about kerning and share your thoughts & some examples in the comments section.

Responsive images on the web


According to HTTP Archive, the top 100 websites on the Internet today look something like this:

That’s right: as of November 2015, images account for roughly two thirds of the average page’s weight. Website sizes are getting ever more bloated, and this can lead to slow load times and a poor UX, especially on mobile devices.

However, as we all know, a ‘picture paints a thousand words’ and clients are very keen on having image rich pages on their websites, regardless of the device used to view them. They understand that customers won’t read every word of text on their website, but images can create a far bigger impact in terms of perception and understanding of their brand. Naturally, they want those images to look great, run fast and fit to the parameters of any device.

So, images are getting ever more popular, yet they can damage User Experience. Responsive images are the solution here, and can play a big role in overcoming slow load times. However, as useful as they are, responsive images aren’t the easiest thing to implement. There are numerous methods of deploying them, yet each has its own limits and drawbacks.

There’s been a lot said about how to implement responsive images (see detailed guides here, here and here).  Today we’re going to overview responsive web design more generally, and think about the kinds of questions you need to ask when selecting your method.

The basic approach

The basic approach to making an image responsive is to set its maximum width at 100%. This means your image will always fit to the container around it and won’t ever exceed it either. This works for basic pages, yet if you have a number of different elements working together, you’ll soon run into problems. Other factors to take into account include:

  • Performance and bandwidth

The principal drawback of the ‘basic’ solution is that every device receives the same image. This is OK if your site is populated by small logos or images. However, if you’re using big pictures, sending these to devices which are connecting over limited mobile data connections will hold up load times considerably.

  • Art Direction

A second drawback with the ‘basic’ approach is that while images will fit on any device, they may lose their impact or power on each. The city vista or mountain view that looks great on a landscape desktop screen may look muddled or out of place on a smartphone. The message it’s trying to convey might be lost; you may prefer the mobile screen version to ‘zoom in’ on a certain aspect of the website image.

So, how can you overcome these issues with the basic approach?

Questions to ask

Before delving into solutions, you first need to decide which issue you want to solve. There are a range of solutions to the responsive image problem that have been proposed. However, each has its own specific strengths and limits. Some will help with certain issues; others will be stronger on others. It’s most important here to understand what your client is looking for; this should inform the solution you choose.

  • Is the problem one with art direction?

  • Does the client have a huge website where they want every image to become responsive?

  • Should all images load, or should they load dynamically via JavaScript?

  • Is testing the user’s bandwidth a priority - so you can see whether their connection can handle high-res images?

  • Can you make use of a third party solution, or do you need to keep it hosted in-house?

There are a lot of solutions out there that have been created to respond to the responsive design dilemma. The following are some of the most exciting solutions that have been developed:

1. PictureFill

PictureFill offers a very simple script that displays adaptive images at page load time. It does require some extra markup and a small script, however, and it doesn’t do any bandwidth detection.

2. HiSRC

A jQuery plugin that lets you create low, medium and high-res versions of an image. The script detects network speed and retina-readiness to show the most appropriate version. You will need to be using jQuery, however, and it requires custom markup in the HTML.

3. Adaptive Images 

Adaptive Images is a largely server-side solution. You’ll need to install it (which can take a good while). However, once installed and configured, the PHP script will resize any image for you. It doesn’t, however, detect bandwidth and doesn’t solve art-direction problems.

Other solutions out there include:

Knowing how to do responsive web design is increasingly important in today’s world of diverse devices. Understanding what your client wants from their web pages should help you select the most appropriate tool.  

Create high-performance, touch-first, responsive apps with AngularJS directives, Bootstrap support and Microsoft MVC server-side widgets. Download Ignite UI today for your high-demand data needs.

 

Developer News - What's IN with the Infragistics Community? (2/22-3/6)
