Channel: Infragistics Community

Business Intelligence for the C-Level on the Go


Why do we, as C-Level staff, follow financial news? Perhaps the most obvious reason is that without staying up to date with the market, you’re bound to miss opportunities, new trends and threats from your competitors. In the same way, it’s simply best practice to use your company’s internally held data to make better decisions, spot patterns and exploit new niches. Knowing what’s going on across your organization will inform how you develop company strategy and improve business success.

However, it’s a common theme in research with senior staff that they’re all too often frustrated by the way Business Intelligence is delivered. How often have you waited hours for IT staff to put together an Excel spreadsheet of company data? Are you often frustrated by how hard it seems to get a simple table showing what your sales figures are? When presenting to your shareholders, are you held back by confusing and hard-to-read line graphs?

You’re not alone. C-Level staff need Business Intelligence wherever they are, whenever they need it – from the boardroom to the hotel lobby.

How Big Data changed BI

We’ve never produced as much data as in the last decade. Companies and their servers are collecting vast quantities of information in a way that would never have been possible in the past. Just some of the data types we now store include:

  • Customer information – including their buying patterns
  • Omni channel consumer behavior
  • Sales figures by location and time
  • Human Resources data and all sorts of metrics on our own personnel
  • Fine-tuned and comparable information about supplier costs

All of this ‘Big Data’ has the potential to transform into Business Intelligence. BI – the ability to use data to arrive at actionable decisions - really can be improved and fine-tuned by using the data we collect more effectively.

In the past, using the data we collect in this way could only be achieved by asking data experts to analyze vast Excel spreadsheets and look for patterns. However, today’s BI tools – such as ReportPlus – bring that power back to the hands of C-Level staff, cutting out the middle man between you and your company’s figures.

Dashboards go mobile

C-Level staff have a major responsibility to their company, their employees and their shareholders. The strategic decisions you make can have an enormous impact on long term success, making it essential that your choices are informed by the latest company data. BI is now available in an instant, at the tap of a touchscreen. So, how might BI on the go help you?

1. Impress clients and shareholders

Nothing convinces like facts and figures. It’s one thing to tell, it’s quite another to show. Being able to display your company’s latest figures, highlight where you’re being successful and prove your predictions are more than empty talk is essential. Being able to draw any of your metrics up on the screen of an iPad or Android mobile will impress clients and boost shareholder confidence. No longer do you have to depend on memory or ugly spreadsheets – interactive charts and displays on ReportPlus’ dashboard turn raw statistics into compelling arguments.

2. On the road

When prospects are visiting your office, it’s easy to ask someone in your business analytics team to whip up the latest business facts and figures. However, when you’re travelling, you might need to create your own last minute reports using the latest company stats. ReportPlus connects with your company SharePoint and data sources via the cloud so you can instantaneously access company BI wherever you are.

3. Time = money

Before BI was available on smartphones and tablets, C-Level staff depended on other colleagues to provide the statistics and analysis they needed. Now however, you can access that data directly yourself, saving time and money. Got a gut feeling your sales team is underperforming? Worried you’re understaffed in certain outlets? Not convinced you have the resources to take the next step yet? BI tools like ReportPlus empower you to make decisions faster, without having to wait until someone else compiles the figures.

4. Be the first to spot a trend

C-Level staff are so valuable because of their high level perspective. As you steer your company forward through a challenging market, your knowledge, experience and awareness of the broader market should be combined with internal company statistics. Spotting patterns in your data and combining this with your own knowledge of the market helps you pinpoint new opportunities. In a fast paced sector, having instant access to that data from your iOS or Android mobile or tablet will help you outmaneuver competitors.

BI on the go

The most successful leaders combine knowledge of their company and market with the most cutting edge statistics and technology. When BI can help you spot trends, discover new opportunities while on the move and improve your company’s offering, you’ll have the power to do more, faster.

Tools like ReportPlus from Infragistics allow C-Level execs to draw their organization’s data into easy-to-use, interactive dashboards on a touchscreen device. ReportPlus is available on iOS, Android and will soon be released for desktop. Try ReportPlus free on iOS today.


Leveraging the Power of Asynchrony in ASP.NET

Asynchronous programming has received a lot of attention in the last couple of years, and there are two key reasons for this: first, it provides a better user experience by not blocking the UI thread, which prevents the screen from hanging until processing is done; and second, it helps scale up a system drastically without adding any extra hardware.

 

But writing asynchronous code and managing threads gracefully used to be a tedious task. Because the benefits are tremendous, though, many technologies, both new and old, have started embracing it. Microsoft has also invested a lot in it since .NET 4.0, and in .NET 4.5 they made it simpler than ever with the introduction of the async and await keywords.

 

Asynchrony has been usable in ASP.NET since the beginning, but it has never gotten the attention it deserves. Given the way requests are handled by ASP.NET and IIS, asynchrony can be hugely beneficial, and we can easily scale up our ASP.NET applications drastically. With the introduction of the new programming constructs async and await, it’s high time we start leveraging the power of asynchronous programming.

 

In this post, we are going to discuss the way requests are processed by IIS and ASP.NET, then we will see the places where we can introduce asynchrony in ASP.NET and discuss various scenarios where we can gain maximum benefits from it.

 

How are requests handled?

 

Every ASP.NET request has to go through IIS before ultimately being handled by ASP.NET. A request is first received by IIS, which after initial processing forwards it to ASP.NET, which actually handles the request (for an ASP.NET request) and generates the response. This response is then sent back to the client via IIS. IIS has worker threads that are responsible for taking requests from the queue, executing IIS modules, and then forwarding requests to the ASP.NET queue. ASP.NET doesn’t create any threads or own any thread pool to handle requests; instead it uses the CLR thread pool and gets threads from there. The IIS module calls ThreadPool.QueueUserWorkItem, which queues the request to the CLR worker threads. The CLR thread pool is managed by the CLR and self-tuned (meaning it creates/destroys threads based on need). We should also keep in mind that creating and destroying a thread is always a heavy task, which is why this pool allows the same thread to be reused for multiple tasks. So let’s see pictorially the way a request gets processed.

 

 

 

In the above pic, we can see that a request is first received by HTTP.sys and added to the queue of the corresponding application pool at kernel level. One IIS worker thread takes the request from the queue and, after its processing, passes it to the ASP.NET queue. (The request may be returned from IIS itself if it is not an ASP.NET request.) A thread from the CLR thread pool is then assigned to the request and is responsible for processing it.
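The IIS-to-CLR hand-off described above centers on ThreadPool.QueueUserWorkItem. As a minimal standalone sketch (a plain console program, not ASP.NET itself), we can queue work to the same CLR thread pool and confirm it runs on a pool thread:

```csharp
using System;
using System.Threading;

class ThreadPoolSketch
{
    static void Main()
    {
        using (var done = new ManualResetEvent(false))
        {
            // Queue a work item to the CLR thread pool -- the same call the
            // IIS module uses to hand a request over to ASP.NET.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                // IsThreadPoolThread confirms the work ran on a pool thread,
                // which the CLR creates/destroys (self-tunes) as needed.
                Console.WriteLine(Thread.CurrentThread.IsThreadPoolThread);
                done.Set();
            });
            done.WaitOne();
        }
    }
}
```

Running this prints True, because the queued delegate executes on one of the CLR’s reusable pool threads rather than a thread we created ourselves.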

 

When should Asynchrony be used in ASP.NET?

 

Any request can broadly be divided into two types:

1. CPU bound

2. I/O bound

 

A CPU-bound request needs CPU time and is executed in the same process, while I/O-bound requests are blocking in nature and depend on other modules to perform the I/O operations and return the response. Blocking requests are one of the major roadblocks to highly scalable applications, and in most of our web applications we waste a lot of time waiting for I/O operations. Asynchrony should be used in the following scenarios:

 

1. For I/O-bound requests, including:

a. Database access

b. Reading/writing files

c. Web service calls

d. Accessing network resources

2. Event-driven requests, like SignalR

3. Where we need to get data from multiple sources
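For the last scenario in particular – getting data from multiple sources – asynchrony lets independent I/O-bound calls run concurrently. Here is a minimal sketch; the method names, delays and return values are illustrative stand-ins for real database or web service calls:

```csharp
using System;
using System.Threading.Tasks;

class MultiSourceSketch
{
    // Stand-in for an I/O-bound call (e.g. a database query).
    static async Task<string> GetSalesAsync()
    {
        await Task.Delay(300);
        return "sales";
    }

    // Stand-in for a second, independent I/O-bound call (e.g. a web service).
    static async Task<string> GetInventoryAsync()
    {
        await Task.Delay(300);
        return "inventory";
    }

    static async Task Main()
    {
        // Start both operations before awaiting either: the total wait is
        // roughly the longest single call, not the sum of both, and no
        // thread is blocked while the "I/O" is in flight.
        Task<string> sales = GetSalesAsync();
        Task<string> inventory = GetInventoryAsync();
        string[] results = await Task.WhenAll(sales, inventory);
        Console.WriteLine(string.Join(",", results)); // prints "sales,inventory"
    }
}
```

Task.WhenAll returns the results in the order the tasks were passed, so the combined wait here is about 300 ms rather than 600 ms.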

 

Let’s create an example: first a simple synchronous page, which we will then convert to an asynchronous one. For this example, I have put in a delay of 1000 ms (to mimic a heavy call like a database or web service call) and also downloaded one page using the WebClient, as follows:

 

protected void Page_Load(object sender, EventArgs e)
{
    System.Threading.Thread.Sleep(1000);

    WebClient client = new WebClient();

    string downloadedContent = client.DownloadString("https://msdn.microsoft.com/en-us/library/hh873175%28v=vs.110%29.aspx");

    dvcontainer.InnerHtml = downloadedContent;
}

 

Now we will convert this page to an asynchronous page. There are three main steps involved:

 

1. Change it to an asynchronous page by adding Async="true" in the page directive as follows:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Home.aspx.cs" Inherits="AsyncTest.Home" Async="true" AsyncTimeout="3000" %>

 

I have also added AsyncTimeout, which is optional and can be set based on our requirements.

 

2. Convert the method to an asynchronous one. Here we convert Thread.Sleep and client.DownloadString to their asynchronous counterparts, as follows:

                  

private async Task AsyncWork()
{
    await Task.Delay(1000);

    WebClient client = new WebClient();

    string downloadedContent = await client.DownloadStringTaskAsync("https://msdn.microsoft.com/en-us/library/hh873175%28v=vs.110%29.aspx");

    dvcontainer.InnerHtml = downloadedContent;
}

3. Now we can call this method directly from Page_Load and make it asynchronous as follows:

 

protected async void Page_Load(object sender, EventArgs e)
{
    await AsyncWork();
}

 

But here Page_Load is declared async void, which should be avoided in almost all cases. Page_Load is part of the page life cycle, and if we make it async there are scenarios where later life-cycle events may execute even while Page_Load is still running. It is highly recommended to use RegisterAsyncTask instead, which registers the asynchronous method to be executed at the most appropriate point in the life cycle and avoids these issues. So we should write it as follows:

 

protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(AsyncWork));
}

 

 

Now we have converted our page to an asynchronous one, and it won’t be a blocking request.

 

I deployed both applications on IIS 8.5 and tested them with a burst load. On the same machine configuration, the synchronous page was able to handle just 1,000 requests in 2-3 seconds, while the asynchronous page was able to serve more than 2,200; beyond that, we started getting Timeout or Server Not Available errors. Although the average request processing time isn’t much different, we were able to serve more than 2x the requests just by making the page asynchronous. If that isn’t proof that we should leverage the power of asynchronous programming, I don’t know what is!

 

There are some other places where we can introduce asynchrony in ASP.NET too:

 

1-      By writing asynchronous modules

2-      By writing Asynchronous HTTP Handlers using IHttpAsyncHandler or HttpTaskAsyncHandler

3-      Using web sockets or SignalR
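As a brief illustration of the second option, a handler can derive from HttpTaskAsyncHandler (available since .NET 4.5) and override ProcessRequestAsync. This is only a sketch – the class name and body are illustrative, and the handler still has to be registered in web.config like any other handler:

```csharp
using System.Threading.Tasks;
using System.Web;

// Sketch of an asynchronous HTTP handler. While the awaited operation is
// in flight, no request thread is blocked.
public class SampleAsyncHandler : HttpTaskAsyncHandler
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        await Task.Delay(1000); // stand-in for a real I/O-bound call

        context.Response.ContentType = "text/plain";
        context.Response.Write("Handled asynchronously");
    }
}
```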

 

Conclusion

In this post, we discussed asynchronous programming and saw that with the help of the new async and await keywords, writing asynchronous code is very easy. We covered how requests are processed by IIS and ASP.NET, discussed the scenarios where asynchrony can be most fruitful, and created a simple example showing the benefits of asynchronous pages. Lastly, we explored some other places where asynchrony can be leveraged in ASP.NET.

 

Thanks for reading!

 

 

Infragistics Ultimate 15.2 is here. Download and see its power in action!

 

 

Do programmers work better at night?


Here’s a pop quiz. What do Elvis Presley, Winston Churchill and Barack Obama have in common?

  1. They have kinda funny names
  2. They are/were known for being ‘night owls’
  3. They all like singing in the shower

That’s right – b. – they were all known to be night owls – according to this article anyway. Now, their sleeping patterns might not be the thing they’re most famous for, but the fact they tended to stay up all night working on their ideas might explain some of their success (whether it was for good or related to hound dogs).

 

And programmers have more in common with Mr. Presley, Mr. Churchill and Mr. Obama than you might think. Just like these famous figures, developers are also known for working late into the night – someone’s even written a book about it. So, what’s the explanation behind this? Why do some people seem to do their most complex, creative and challenging work out of time with the circadian rhythm?

 

Distraction, IQ, work flows

 No one quite understands why certain types of people – programmers included – do their most creative work late at night. So, we decided to go out and see what the different perspectives are on late night working in order to get to grips with the debates. Which do you think is right?

 

  1. There are fewer distractions at night

An email asking for your urgent help. Office jokes and conversations. News on Facebook and Twitter. Meetings. Endless meetings.

 

Working during the day is full of endless distractions. And this is extremely unproductive for developers. Building an app or a piece of software means you need to focus a large amount of brain power thinking creatively and constructively. You need to be able to ‘see’ the final product in its entirety and develop tools which will do that. The distractions of the daytime constantly shatter the delicate tool you’re building in your mind’s eye. At night, by contrast, there are far fewer distractions, meaning you can work for hours at a time without anyone interrupting your work.

 

  2. You’re more easily distracted when you’re tired

We never promised these theories would follow the same logic. In total contrast to the previous argument, a Scientific American article recently argued that our most creative behavior happens when we’re not at our best. The argument goes like this:

 

- In the morning, you’re laser-focused on the task at hand. Your mind is clear, your thinking is lucid and you power through your task list in no time at all.

- However, this kind of focused, concentrated behavior is actually quite unproductive for creativity. Running against our basic presumptions, the argument goes that we’re actually more creative when we’re tired and easily distracted.

- When you’re tired, your decision making is on a par with being a little drunk (we wouldn’t recommend asking your boss for a beer at 9AM though). Because you’re less focused, different ideas come into your head more easily, and you’re able to combine different sources of inspiration and think more laterally.

- And this is useful for developers. When dealing with a particularly complex algorithm or User Experience problem, being open to new thoughts and seeing the problem in a different way is actually really helpful.

 

  3. Successful programming needs long, uninterrupted work periods

Another explanation of a developer’s need to stay up late is that the kind of work we do is simply not suited to the regulated (and arbitrary) structure of a 9-5 job. Programming is, by its nature, a highly creative job – not dissimilar to painting or creating music. Trying to force such a creative job into a specific rhythm just doesn’t really work.

 

Project managers and admin people are focused on completing a series of specific tasks, making a 9-5 perfectly suitable for them. But the job of a developer is very different; it’s about diving deep into your subject, spending hours on minute details. Basically, it doesn’t fit well with the work of regular office jobs.

 

  4. Programmers have a higher IQ

According to a study with a large sample of American children, those who tended to go to bed very late at night had higher IQs than those who preferred getting up at the crack of dawn. Long derided as a sign of laziness, it might just be that smart people like going to bed and waking up later than others.

 

Programming requires a high level of math knowledge, lateral thinking, logic and the ability to write precisely in a variety of languages. For whatever reason, smart people go to bed later, and programmers tend to be smart people.

 

And what about you?

The jury’s still out on which of these explanations is right – although perhaps they all have a little truth in them. Let us know in the comments section below your current or previous sleeping patterns and why you think programmers seem to sleep so late.

 


 

 

 

From Data to Decisions: How Do You Make Decisions?


What was the best decision you ever made? Was it when you chose the University you’d study at? Was it the time you decided to change your career? Was it when you threw caution to the wind and took that month-long vacation?

The decisions we take can change our lives. Of course, not every decision will have an enormous impact on your long term happiness or success (Indian takeaway or Chinese takeaway?). However, the way we actually make decisions – whether it’s in our personal lives or at work – is largely similar. The thing you’re deciding on could be enormous or insignificant, but the process of reaching a decision can often be almost identical.

The issue is that our decision making process excels for certain types of choices – the kinds of things that kept our ancestors alive and kicking. However, the modern world, and especially the world of work, asks us to make a lot of complex decisions which require rational thinking and leaving our ‘gut instinct’ aside. While ‘intuition’ still has a lot of currency in the world of work, it’s also important to consider a wide range of facts – especially when large sums of money and other people’s livelihoods are involved.

Business Intelligence is key to forming the best decisions and in today’s post we’ll look at how you can tap into your organization’s data to make the best decisions.

How do you arrive at decisions?

The vast majority of our decisions are made using heuristics. Heur-what-stics? Heuristics is, basically, a hard-to-pronounce word which describes how we make decisions by reducing complexity in the world around us.

Our minds are very good at making associations between things in the world around us, combining these with our experience, and helping us make decisions on the back of this. Heuristics is useful because it saves us a lot of time. We can think of the brain as a filter, constantly cutting out the noise and helping us focus on what’s important. If we didn’t have this capability, we’d basically never make any decisions about anything. Let’s illustrate this with a quick example.

You need some new pants and are walking through a mall in the city center on your lunch break. On arriving at the mall you look at the floor plan. You find a department store; once there you follow the sign to the men’s clothing department (assuming you’re a dude). Although there are lots of jeans available, you haven’t got a lot of time, so instantly pick a pair made by a well-known brand.

Heuristics helps you navigate this situation quickly and effectively. There were probably plenty of pants-selling stores you could have visited in the mall, but you chose the department store unconsciously because you knew you’d find what you were looking for there easily. When you chose the well-known jeans brand, heuristics came into play again. You associated it with quality, paying more on trust that the designers know what they’re doing.

In both these situations, you could have spent a long time analyzing every single option. You could have visited every store in the mall trying out their stock. You could have compared all the jeans in the department store, examining how they were stitched, testing the strength of the rivets on the pockets. But, you would get nowhere in life if you took this approach to everything. Heuristics helps you make decisions fast.

However, not all decisions are made heuristically. Daniel Kahneman, the Nobel Prize-winning cognitive psychologist and author of Thinking, Fast and Slow, describes two different types of thought. One is our quick, heuristic thinking. The other is slow, deliberate and logical.

So, what’s this got to do with Business Intelligence?

The decisions we make in the workplace should combine the two kinds of thinking humans are capable of. In high pressure situations, you can’t spend hours dithering about which course to follow. Nonetheless, this isn’t always the most appropriate method. Heuristics is, inevitably, based on past experience, prejudice and unconscious ‘instinct’. This can be useful in some cases, yet in others it will hold you back.

For example, if you’re deciding to expand operations into a new market, this isn’t the kind of thing that should be done on a whim. You need to be sure there’s actually a market for your product, that you have the resources to set up new offices and hire new people. If this kind of decision isn’t made with solid facts backing it up, you risk making major blunders. Yet, as the graph below shows, a lot of decision makers keep making choices based on heuristic methods:

Research: Which of the following best describes your personal approach to making significant management decisions?

Source: Economist Insights

Business Intelligence can help decision makers avoid making major decisions ‘on a whim’. Tools like ReportPlus from Infragistics allow any decision maker to draw their organization’s data into easy to use, interactive dashboards on a touchscreen device. This allows rapid access to hard facts and means fewer decisions are based on instinct and more decisions are based on logical and deliberate thinking.

ReportPlus is available on iOS, Android and will soon be released for desktop. Try ReportPlus free on iOS today. It might be the best decision you ever make.  


Mobile Support for SharePoint?


Since 2001 SharePoint has been one of the key industry players when it comes to document management, records management, and intranets. Its widespread popularity has made it one of Microsoft’s most successful products yet. To put this success in perspective, in terms of revenue Microsoft earns more money with SharePoint than it does with Windows.

However, from a technological point of view, a lot of things have changed in the way we interact with our documents and data, since SharePoint was first released. These changes mean its once revolutionary powers sometimes feel a little behind the curve. By far the most obvious change here is that people now bring their own tablets and smartphones into the workplace. As a result, businesses have almost no choice but to support Bring Your Own Device (BYOD) policies. The issue here? SharePoint is a lot slicker on a monitor than it is on a 3-inch screen. 

SharePoint is not so mobile for intranets

To be honest, mobile support for SharePoint when it comes to web content management has never been state of the art. Where other web applications take a “mobile first” approach with responsive designs, SharePoint has some flaws when it comes to mobile support. For example, out-of-the-box, a mobile user is delivered a totally different mobile view from what they’re used to on the desktop. This works fine for browsing document libraries, but doesn’t work in web content management scenarios.

Nowadays, the standard is to support mobile users by providing a responsive design. To put it bluntly, SharePoint just isn’t responsive. It requires custom master pages and page layouts to deliver a responsive design, but custom master pages are advised against, especially in SharePoint Online.

Things did improve somewhat in SharePoint 2013 with device channels and improved mobile views. Also, Office Web Apps are pretty mobile-friendly now, so for document management it works pretty well on the go. However, for Intranet or content management applications, SharePoint 2013 could hardly be described as mobile friendly.

NextGen Portals: things will get better!

The current site templates available in SharePoint are not focused on mobility. On the other hand, the new ‘NextGen Portals’ do have a bigger focus on mobility. These portals are only available in Office 365 and are still a work in progress. The first portal to be made generally available is the Office 365 Videos portal – it has a very modern look & feel and a strong focus on mobility, and we think it looks awesome on every device. Microsoft has more portals under construction, and these are likely to be released over the next few months. This Sway document outlines all the aspects of the NextGen portals we can look forward to.

Three areas where SharePoint can do a better job on mobile

We’ve found that there are three key areas where SharePoint lets users down when they’re using mobile devices:

Adding, editing, and deleting files while on the move

This only applies to operating systems other than Windows Phone. With Android and iOS there is currently no app that makes it easy to get an overview of all team sites, and to be able to add, edit, or delete files. Windows Phone users have the Office app installed by default, which makes it a breeze to do this, but iOS and Android users don’t get this app. They do of course have access to the mobile apps for Word, Excel, and PowerPoint, but these only offer document editing capabilities - and not the option to store files directly in SharePoint.

Social features

SharePoint’s social features have never been top of the range compared to other products. And when Microsoft acquired Yammer, the social features built into SharePoint stopped receiving significant upgrades. As a result there is no real out-of-the-box support for SharePoint’s social features on mobile (although SharePlus from Infragistics does fill this gap by bringing SharePoint’s social features to iOS and Android).

Enterprise Search and offline use

Without a proper search engine, people cannot find the documents they need. SharePoint has an excellent search engine - the same engine that drives Bing, and that also delivers the new Office Graph. However, the search experience on a mobile phone is not that great: it works, but that’s about it. For example, there is no concept of offline search. SharePlus, however, does bring the enterprise search engine to mobile devices, delivering a user interface that taps into SharePoint search and adds offline search, search across all documents, and much more.

Mobile is now - and the future

We now take mobility for granted when it comes to applications in the enterprise. Employees expect to be able to use their mobile phones to access company applications, and this remains the case when it comes to SharePoint. While Microsoft has improved the platform for mobile users over time, there is still a lot of room for improvement. To see what that improvement could look like, start your ten day SharePlus trial today.

Developer Humor: Coding Logic


Sometimes, your weekend needs a little pick-me-up. That's where this week's Developer Humor comes in! Hope you enjoy!


Developer Quotes: Edition 2


This is the second installment of Developer Quotes!

If you missed the first one, you can check it out here. Also, if you have any favorite quotes you'd like to see brought to life, feel free to leave them in a comment below and maybe you'll see them in a future edition!


Tips & Tricks in Visual Studio 2015 – Part 1


This is the first blog post of a series that I plan on writing about Visual Studio 2015 tips and tricks. In part 1 of the series, we’ll look at some of the new features in Visual Studio 2015, including multiple sign-in, device preview for Universal Windows apps, the integration option in Blend, the Xaml Peek feature, and more.

1. Multiple Sign-In

When you launch Visual Studio 2015 for the first time, you will be given the option to log in with your Live account to synchronize your IDE settings. There are times when you might want to switch user accounts within the IDE; you can use the Account Settings option in Visual Studio 2015 to do that.

To launch the Account Settings screen, select your name in the top right corner of Visual Studio 2015. If you haven’t signed in yet, you will see a “sign in” link with which you can log in to your Live account.

The Account Settings screen lets developers personalize their account. You can sign out, sign in, and add additional accounts. With the "Add an account" link, you can add more than one account to Visual Studio 2015. This is especially useful when you are working on multiple projects and each needs a different IDE setting.

2. Device Preview menu

When developing a Universal Windows Platform app, there are times when you might want to preview a page in the Xaml designer. To do this, you can use the new Device Preview menu bar. This new feature shows how your Xaml-based page would render on various devices.

The Device Preview menu is displayed just above the Xaml page in the Designer as shown in the screenshot below.

3. Integration with Blend

Generally, UI designers use Blend to design the UI in Xaml with the help of styles, animations, and so on, while developers use Visual Studio 2015 to write C# code and extend the functionality of the design. It’s quite possible for the designer and the developer to work on the same Xaml file, which requires the IDE to reload the updated file each time.

The above dialog lets users know that the file was updated externally. To reload the file, click the "Yes" or "Yes to All" button. Once you do that, updates made in the Blend IDE will be reflected in Visual Studio 2015 too.

Microsoft Visual Studio 2015 includes a new setting that lets you manage this behaviour, and you can access it by navigating to Tools -> Options. In the Options Dialog, select Environment -> Documents from the left sidebar.

Select the options "Reload modified files unless there are unsaved changes" and "Detect when file is changed outside the environment" and click OK.

4. Xaml Peek feature in Visual Studio 2015

Peek Definition was one of the more interesting features available in Microsoft Visual Studio 2013. It allows developers to view and edit code in different files without even switching windows. You can use the shortcut Alt + F12 to open the Peek Definition window.

This feature is now available in the Xaml code editor too. Select code in the Xaml editor, choose Peek Definition from the context menu, and start modifying the code in the Peek Definition window.

 

 

5. Indication of Unused Namespace in the Code Editor

Microsoft Visual Studio 2015 indicates to the developer when namespaces are unused in the code: it displays them in a gray color, and you can remove them from the current page since they are not used.

 

 So there you have it! These are just a few of the great new features in Visual Studio 2015 – stay tuned for part 2 when we dive into even more tips and tricks.

 

Infragistics Ultimate 15.2 is here. Download and see its power in action!

What are ViewData, ViewBag, and TempData in ASP.NET MVC?


 

I have often seen entry-level developers struggle with the differences between, and usage of, ViewData, ViewBag, and TempData in ASP.NET MVC. And while there are many articles and blog posts on this topic out there, I’ll try to explain it simply.

To start with, ViewData, ViewBag, and TempData are all objects in ASP.NET MVC that are used to carry or pass data in different scenarios. You may have a requirement to pass data in the following cases:

  • Pass data from controller to view
  • Pass data from one controller to another controller
  • Pass data from one action to another action
  • Pass data between subsequent HTTP requests

 

At a higher level, we can depict the use of ViewData, ViewBag, and TempData as shown in the image below:

                        

Passing Data from Controller to View

Let us consider a scenario where you’re passing data from controller to view. Usually we pass complex data to the view using the model. Here, let’s say we have a strongly typed view that uses a List<Product> as its model, as shown in the listing below:

public ActionResult Index()
{
    List<Product> p = new List<Product>()
    {
        new Product { Id = 1, Name = "Pen", Price = 300 },
        new Product { Id = 2, Name = "Pencil", Price = 100 }
    };

    return View(p);
}

On the View, data is displayed by rendering the model as shown in the listing below:

<table class="table">
    <tr>
        <th>
            @Html.DisplayNameFor(model => model.Name)
        </th>
        <th>
            @Html.DisplayNameFor(model => model.Price)
        </th>
        <th></th>
    </tr>

@foreach (var item in Model) {
    <tr>
        <td>
            @Html.DisplayFor(modelItem => item.Name)
        </td>
        <td>
            @Html.DisplayFor(modelItem => item.Price)
        </td>
        <td>
            @Html.ActionLink("Edit", "Edit", new { id=item.Id }) |
            @Html.ActionLink("Details", "Details", new { id=item.Id }) |
            @Html.ActionLink("Delete", "Delete", new { id=item.Id })
        </td>
    </tr>
}
</table>

 

Now we have a requirement to pass data (other than the model) to the view from the controller. There are two possible ways data can be passed.

Let us assume that we want to pass a simple string to the view besides the Product data model.  

Passing data using ViewBag

We can pass data using the ViewBag as shown in the listing below:

public ActionResult Index()
{
    ViewBag.data1 = "I am ViewBag data";
    return View(p);
}

 

On the view, ViewBag data can be read as a property of the ViewBag as shown in the listing below:

<h2>@ViewBag.data1</h2>

 

Passing data using ViewData

We can pass data using the ViewData as shown in the listing below:

 

public ActionResult Index()
{
    ViewData["data1"] = "I am ViewData data";
    return View(p);
}

On the view, ViewData can be read as a key-value pair, as shown in the listing below:

<h2>@ViewData["data1"]</h2>

 

Let us examine the differences between ViewData and ViewBag. ViewBag is a dynamic property, based on the dynamic type, whereas ViewData is a dictionary object. We read data from ViewBag as a property and from ViewData as a key-value pair. Some bullet points about both follow:

ViewData

  • It’s a property of type ViewDataDictionary.
  • Data is passed in the form of key-value pairs.
  • To read complex type data on the view, typecasting is required.
  • To avoid exceptions, null checking is required.
  • The life of ViewData is restricted to the current request; it becomes null on redirection.
  • ViewData is a property of the ControllerBase class.


ViewBag

  • It’s a property of dynamic type.
  • Data is passed as a property of the object.
  • There is no need for typecasting to read the data.
  • There is no need for null checking.
  • The life of ViewBag is restricted to the current request; it becomes null on redirection.
  • ViewBag is a property of the ControllerBase class.

 

In the ControllerBase class, both are defined as properties, as shown in the image below:

 

We can summarize ViewBag and ViewData as objects used to pass data from controller to view within a single request cycle. Values assigned to ViewBag and ViewData are nullified on the next HTTP request or on navigation to another view.

 

TempData

One of the major attributes of both ViewData and ViewBag is that their lifecycle is limited to one HTTP request: on redirection, they lose their data. We may have other scenarios that require passing data from one HTTP request to the next - for example, passing data from one controller to another controller, or from one action to another action. TempData is used to pass data from one request to the next request.

 

 

 

Let us say that we want to navigate from the Index action to the Read action and, while navigating, pass data to the Read action. In the Index action, we can assign a value to TempData as shown in the listing below:

public ActionResult Index()
{
    TempData["data1"] = "I am from different action";
    return RedirectToAction("Read");
}

 

We can read TempData as a key-value pair. In the Read action, TempData can be read as shown in the listing below:

public string Read()
{
    string str;
    str = TempData["data1"].ToString();
    return str;
}

 

Like ViewData, TempData is also a dictionary object, and to read the data, typecasting and null checking are required. Keep in mind that TempData persists data only to the subsequent HTTP request. When you are sure about the redirection, use TempData to pass the data.

Some points about TempData are as follows:

  • TempData is used to pass data from one HTTP request to the next HTTP request - in other words, from one controller to another controller or from one action to another action.
  • TempData is a property of the ControllerBase class.
  • TempData stores data in the session object.
  • To read data, typecasting and null checking are required.
  • TempData is of type TempDataDictionary.
  • TempData works with HTTP redirection (HTTP 302/303 status codes).

 

Summary

ViewData, ViewBag, and TempData are used to pass data between controllers, actions, and views. To pass data from controller to view, either ViewData or ViewBag can be used. To pass data from one controller to another controller, TempData can be used.

I hope now the concepts of ViewBag, ViewData and TempData are a bit clearer – and thanks for reading!

 

Try our jQuery HTML5 controls for your web apps and take immediate advantage of their stunning capabilities. Download Free Trial now!

 

How Mobile is Changing the Use of SharePoint


You only need to go back six or seven years and ‘mobile computing’ was still ‘the future’. Tablets didn’t really exist, certainly not as we know them now (the first iPad arrived in 2010), and phones were very different beasts. Few but a hard core of BlackBerry-wielding super-fans would have described the brick in their pocket as a productivity tool. We played games on our phones, sent text messages, and struggled with a very basic and slow-moving Internet - that was about it.

Effective mobile productivity was an even more remote proposition in the workplace. Hard to believe, but there was a time when you simply couldn’t sync a Dropbox full of content to a device, scan documents, or create PowerPoint slides. Phones were just for calling Michael in Accounts and chasing paperwork over at a distant office.

Fast forward to today however and things are very different. So different in fact that one has to force oneself to take a step back and realize just how powerful the typical phone has become. As an unscientific test I had a quick look at the apps I am currently using:

  • Email – now a core part of nearly every phone.
  • Camera – I can take photos and videos, in incredible quality, and share them direct from my phone.
  • Maps – not only does my phone know where I am, but it can direct me pretty much anywhere in the world (with fairly accurate time estimates and public transport information).
  • Wallet – I can now pay for my travel and purchases directly from my phone.
  • Office apps – I can open, read, and edit all major office file formats.
  • File sync – I have all my personal and work files synced to my phone, available when and where I need them.

Impact on Office 365 and SharePoint

The last two apps are very clearly work use cases, and my phone is now a powerhouse of productivity in the office. So how have the old titans of the enterprise, tools like SharePoint, responded to this ‘mobile first’ world?

Well, the most obvious answer to that question is Office 365, which is, in part, Microsoft’s answer to mobile. But before exploring that a little further, let’s go back to the early days of SharePoint.

SharePoint’s relationship with mobile has been fractured, to say the least. Before SharePoint 2007 there was little consideration at all. SharePoint 2007 (or Microsoft Office SharePoint Server 2007, as we had to call it then) improved things a little, with stripped-down mobile pages designed for speed. The 2010 release was an incremental improvement, but it took the 2013 version for real progress: a much better mobile experience, with Device Channels able to push content in different formats to different devices. Push notifications were supported, and the first version of Office Web Apps allowed documents to be viewed (if not edited).

With the 2013 release we also got the first set of mobile apps, though the basic (and now deprecated) ‘Newsfeed’ app wasn’t of too much use. Things improved as the likes of OneDrive for Business, Office, Yammer and Skype joined the fold.

A new way of thinking about SharePoint and mobile

But it was really the creation of Office 365, originally known as Business Productivity Online Services (BPOS), that crystallized Microsoft’s vision of how mobile could change SharePoint.

For many years SharePoint was about ‘document management’. You uploaded files, stored them in SharePoint, and edited them in a disconnected Office application like Word. With Office Web Apps, and then Office 365 itself, Microsoft has developed a more joined-up experience. Office 365 is now more rounded, in terms of documents certainly (let’s leave its Outlook/Email and Lync/Messaging elements to one side for another post). Users can store documents in SharePoint Online as they always did, or in OneDrive for Business. But now they can edit them in a connected instance of their traditional Word application on the desktop. Or they can use the very impressive Word Online in a browser (including mobile browsers). Or they can use the iPad app, or the Android variant. No matter how users want to work with their documents, they can do it in a single environment, with a single subscription and a single license. Unfortunately, not all SharePoint functionality is available as native mobile apps, so the "mobile" SharePoint experience is still too fragmented to live up to what users are presented with in a web SharePoint environment.

Office 365 didn’t start off as super mobile friendly. It took a more unified internal Microsoft (started by Steve Ballmer and accelerated by Satya Nadella) to realize this. But Office 365 today is the manifestation of how mobile has changed SharePoint.

Where does this leave SharePoint?

Indeed, SharePoint has almost become a victim of its adaptability to a more mobile use case. Look at Office 365 these days and you don’t even see the term ‘SharePoint’. Those of us who have been around long enough know that ‘Sites’ is SharePoint Online (and we still recognize that ‘S’ icon), but it is much more of a platform tool than a product in its own right. That much is true online; on-premises things are a little different, and the upcoming 2016 release shows SharePoint is still a force to be reckoned with.

Users are the winners

Ultimately, users have won out. The devices in their pockets have become exponentially more powerful, and Microsoft has responded, albeit after a few missteps, with a revamped SharePoint that supports some of these new ways of working. But full SharePoint capabilities are not yet available as a true native mobile experience.

Looking to mobilize your corporate SharePoint? Check out Infragistics' SharePlus' demo: the ultimate mobile productivity solution for SharePoint and Office 365.

Visualizing Images


I think it's an interesting exercise to visualize data contained in a photograph as we might other datasets. Some cameras and photo-editing software do just this, constructing histograms from the red (R), green (G) and blue (B) values (the three "primaries") in the image to help expert photographers/editors judge and improve color balance. To illustrate this idea, I'll use the color photograph and its grayscale counterpart below.

It's easiest to start with the grayscale image where the R, G and B values are, by definition, identical for any given pixel. On the left we have a grayscale value of 0, meaning an (R, G, B) value of (0, 0, 0), a.k.a. black; on the right it is (255, 255, 255) or white. In between we have 254 shades of gray.
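Counting those values is all a histogram does. As a minimal sketch - pure Python, with a short made-up pixel list standing in for a real decoded image:

```python
from collections import Counter

# A tiny stand-in for a decoded grayscale image: one value per pixel, 0-255.
pixels = [0, 0, 128, 128, 128, 255]

# The histogram is simply a count of pixels at each of the 256 gray levels.
histogram = Counter(pixels)

print(histogram[0])    # 2 black pixels
print(histogram[128])  # 3 mid-gray pixels
```

A real image would supply hundreds of thousands of values, but the counting is identical.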

This is one of the few cases where use of a gradient in a chart may actually help clarify things, in this case by coloring each bar according to the grayscale value it represents. The only problem then is picking a neutral background color:

With a color image we need to plot each of the R, G, and B histograms separately. It's less cluttered to use lines rather than bars, but with a sensible color scheme they probably don't need labeling.
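The per-channel counting behind those three lines can be sketched the same way, again with made-up (R, G, B) tuples rather than a real decoded image:

```python
from collections import Counter

# Stand-in for a decoded color image: one (R, G, B) tuple per pixel.
pixels = [(255, 0, 0), (255, 0, 0), (0, 128, 0), (30, 96, 92)]

# One histogram per channel, counted independently of the other two.
r_hist = Counter(p[0] for p in pixels)
g_hist = Counter(p[1] for p in pixels)
b_hist = Counter(p[2] for p in pixels)

print(r_hist[255])  # 2 pixels with a red value of 255
print(b_hist[92])   # 1 pixel with a blue value of 92
```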

Aside from achieving a good color balance, I think there is another issue that is interesting to explore that is important for web development: compression. JPEG compression is a fairly complex topic and I'll confess I've yet to really get my head around it. The color photo above is a high-quality (low-compression) miniature of the original (eight megapixel) image I shot. It is 46 kilobytes in size. The image below is the same in terms of number of pixels but is more highly compressed and of lower quality. It is, however, only nine kilobytes in size.

The lower quality should be obvious in the photo itself, but does the RGB histogram look significantly different? Before trying this out I genuinely had no idea what to expect. While the histograms below are clearly different to the ones above they don't really scream "this image has been compressed much more than the other one".

An alternative to the JPEG is the PNG. Here's an 8-bit PNG version of the photograph:

This image looks a lot better than the low-quality JPEG, but it’s also six times bigger in file size, and a third bigger than the high-quality JPEG. The histogram is also very different and difficult to read (note also the difference in the vertical scale).

This chaotic histogram is a result of the fact that only 256 different colors can be used in an 8-bit PNG (just like a GIF).

The histogram data above can be extracted from (at least some versions of) Photoshop and presumably other photo editing software. But having not found any major difference between the histograms for the high- and low-quality JPEGs I was curious to see whether more complex representations of the RGB data would highlight the difference. This required extracting all the individual RGB values, not just sums for the different channels. I didn't know how to do this in Photoshop so I used R (other options are available). In an ideal world a 3D scatter plot would work brilliantly - one dimension for each of R, G and B. But 3D scatter plots on 2D screens rarely work. So I opted for a more conventional 2D scatter plot, using point color to show the third color. In the examples below I (somewhat arbitrarily) opted to plot blue value against red value, and colored the points according to the green value. For instance, a data point representing an image pixel with an RGB value of (30, 96, 92) would be plotted at the point (30, 92) on the chart and have an RGB color of (0, 96, 0).
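The mapping just described is trivial to express in code; here is a sketch (the function name is mine, not from any plotting library):

```python
def plot_point(rgb):
    # Position: x = red value, y = blue value.
    # Color: the green channel only, so greener pixels plot brighter.
    r, g, b = rgb
    return (r, b), (0, g, 0)

position, color = plot_point((30, 96, 92))
print(position)  # (30, 92)
print(color)     # (0, 96, 0)
```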

My original attempt to do this suffered from some serious overplotting issues. To reduce, though not remove, this issue I made the points much smaller and took a random sample of "just" 20,000 points (just under 20% of the data) for each image. I also added the marginal distributions (i.e. the relevant histograms) at the unlabeled extremities of the plot. These were derived from all the image pixels, not just the sample.

Now we can see a difference between the high- and low-quality JPEGs: the latter has much more pronounced diagonal bands of points and gaps. Examining the data, there are around 20,000 different RGB values in the ~105,000 pixels of the high-quality JPEG but only 15,000 in the low-quality JPEG. As for the PNG plot, that is an extreme example of overplotting: there really were 20,000 points plotted, they just occupy only a few hundred different positions.
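Counting distinct RGB values like this is straightforward once the raw pixel tuples have been extracted; a sketch with synthetic data (not the actual photographs):

```python
# Stand-in pixel data; the real images supply roughly 105,000 tuples each.
pixels = [(30, 96, 92), (30, 96, 92), (255, 255, 255), (0, 0, 0)]

# Each distinct (R, G, B) tuple counts once, however many pixels share it.
distinct_colors = len(set(pixels))
print(distinct_colors)  # 3
```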

Ultimately you can probably get through life as, say, a web designer or developer without understanding the intricacies of image compression - I'm still not all that sure how JPEG compression actually works. But I think it's interesting to know the underlying data is there to be played with and for constructing your own dataviz experiments.

Try our jQuery HTML5 controls for your web apps and take immediate advantage of stunning data visualization capabilities. Download Free Trial now!

Karmic Cascade


To break up the usual code-filled technical blogs you see all day, we’ve got something a little different for you today. Our friend Remy Porter from The Daily WTF has written a guest post about the power of teamwork. You can read more of Remy’s work at The Daily WTF, and you can find him on Twitter too. Enjoy!

George didn’t exactly enjoy his job. He was a senior developer, doing software for a credit reporting and collections agency, which was neither “sexy” nor stimulating. He had some great co-workers, though, like his junior developer, Elaine, or the head of IT, Mr. Thomassulo.

Unfortunately, no number of great co-workers could make up for having to work with Lloyd. Lloyd was another senior developer, and Lloyd was a problem. Lloyd wasn’t just a senior developer, he had once been the IT director of a major hospital system. This was a fact he liked to repeat, ad nauseam: “I know what I’m doing,” he would say. “I used to run IT for six hospitals. I supported thousands of users, and it was life or death. I know what I’m doing.”

He knew what he was doing so well that he eschewed the organizational coding conventions. He knew what he was doing so well, that he would unilaterally make changes to applications, without discussing it with other developers, management, or the end users. When functionality inevitably broke as a result of his meddling, it was the extant code that was wrong, because “I know what I’m doing.” He never met a new library or API that he didn’t hate, because he already knew what he was doing, and any changes to his process were bad. At one point, he broke a turn-key, purchased application by cracking open the database and modifying the underlying schema, because “I know what I’m doing.”

The work was boring, but it was really Lloyd who put George over the edge. Constantly fighting with Lloyd, constantly cleaning up Lloyd’s messes, and constantly being undercut by Lloyd in front of management soured him. George started planning a change of job, but before he could leave, he was given One Last Project.

The project itself was the sort of thing that breeds quietly inside of “enterprise organizations”. Their customers - mostly property management and utility companies - needed to constantly exchange legal documents with George’s company. These documents had a processing workflow. There were plenty of applications they could buy which would solve this problem out of the box, but the combination of “not invented here”, “our needs are too specific for a generic product”, and “our capital budget is empty, but our development budget is full” meant that they were building the application in-house.

George was the lead on the project, but George wasn’t giving it his full attention- he was trying to transition to a job that didn’t have a Lloyd. Of course, before he even saw a requirements document, the project was six months behind schedule. A bunch of middle managers had decided to hitch their wagons to this single horse, and that meant meddling. “Let’s set up a new CI server for just this project!” “Let’s switch to using Entity Framework, even though nobody at our organization has actually looked at the technology yet!” “We should do Ajax!” “The PMO has decided we will use a new methodology on this project, which we’re calling Scrum.” “I know our internal browser standard is IE8, but we’re going to do this in HTML5!”

The result was not George’s best work. The code could have been cleaner, the source control history could have been much cleaner, and he made a mistake. Entity Framework, when used in the “Code First” style, can generate your database schema from your object model. Something neither George nor Elaine discovered until the product was released was that, by default, it would enable cascading deletes on certain types of relationships. Specifically, there was a cascading relationship from entities in the StatusCode table to the documents which depended on those status codes.

George did discover his mistake, and so he did the best thing he could on his way out the door. He told Elaine, he documented the flaw, and he told Mr. Thomassulo, along with suggestions for fixing it. They could fix - or not fix - the flaw at their leisure. Since the application had no functionality for deleting status codes, Mr. Thomassulo made the call that this was a “don’t fix”.

George smiled, nodded, and moved on to his next job. Eight months went by, and George spent those months with people who weren’t Lloyd. He kept in touch with Elaine, exchanged the odd email with Mr. Thomassulo, but mostly thought that phase of his career was over.

Until, one afternoon, Elaine called his cellphone. “I am so happy, and so upset with you right now.”

Lloyd’s wanderings through the company’s codebase eventually brought him to the document management application. Lloyd saw that one of the status-codes was named, “Pre-Approval”, and since he ran IT for six hospitals, and knew what he was doing, he knew that the status code was wrong. He knew that it should be, “Waiting for approval…”. He didn’t need to check this over with the users, since he knew what he was doing. He didn’t need to discuss this with the other developers, since he knew what he was doing. He didn’t need to read the documentation, because he knew what he was doing. And he certainly didn’t need to pilot his changes in test, because he knew what he was doing.

If Lloyd had just done the simple thing: UPDATE StatusCodes SET Name='Waiting for Approval…' WHERE id=5, he would have had no problems. That’s what the documentation George had left behind recommended. But Lloyd knew what he was doing, and chose instead to DELETE the old status code, then INSERT a fresh one, then UPDATE the broken records. Of course, he didn’t get past the DELETE step before everything went terribly, terribly wrong. Half of the documents the users were actively working with vanished from the database.

“So,” Elaine explained, “I’m furious with you, because I got roped in to help Lloyd with fixing this. But I got smart- I offered to take on Lloyd’s other work, so that he could focus on this issue.”

Lloyd, of course, knew what he was doing, and thus didn’t need any help. Lloyd saw the problem of missing data, saw that the documents still were actually stored in a network share, saw that with some of the supporting meta-data tables, he could reconstruct the lost data using inference. So that’s what he did. Manually. After all, he knew what he was doing.

No one felt the need to stop him. Instead, Elaine and Mr. Thomassulo looked on, in awe, as Lloyd brute-forced his way to insanity. Lloyd had destroyed the data fresh and early at 8AM, and worked at rekeying the data the rest of the day. He didn’t leave his desk once.

“So, of course,” Elaine told George, “we waited until the end of the day. Then I walked over, knocked on his cube wall, and he snapped at me. He was just mean, so I put on my sweetest smile, I leaned in, and I said, ‘Hey, Lloyd, you know we have backups, right?’ And the look on his face- that’s why I’m happy with you.”

“Did you get a picture?” George asked. “Because if I don’t get to see that face too, it’s my turn to be upset with you!”

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial now and see what it can do for you!

BDD in .NET for Complete Initiates


It's pretty likely that you've heard of behavior-driven development, or BDD.  Maybe it's just in the context of buzzword fatigue and wondering "how many different approaches to software have acronyms that end with DD?"  Whatever your level of cynicism, or lack thereof, BDD is worth a look.

A lot of my work over the last few years has involved coaching and mentoring on the subject of writing clean code, and I often tell initially skeptical developers that they should be writing methods that BAs and managers could more or less read (in places pertaining to business logic, anyway).  This isn't as far-fetched as it sounds.  Think of a bit of code that looks like this.

public bool IsCustomerOrderValid(CustomerOrder orderToBeEvaluated)
{
    foreach(var individualLineItem in orderToBeEvaluated.LineItems)
    {
        if (!_productStockChecker.DoWeHaveInStock(individualLineItem.Product))
            return false;
    }
    return true;
}


Would it really be such a stretch to imagine a non-technical person being able to look at this and understand what was happening? Take an order to be evaluated, look through each of its line items, and check to see if the product they contain is in stock. You don't need to be a programmer to have an idea of what's happening here.

BDD From 10,000 Feet

BDD, in essence, takes this idea and expands upon it by making domain-oriented conversation a part of software acceptance.  Don't worry about "how" just yet.  Suffice it to say that you and various non-technical stakeholders can sit down together and write tests, in plain English, that can be run to demonstrate that system requirements are being met.  That's pretty powerful.

To understand the how, we must first take a small detour back in time.  BDD emerged as a flavor of test-driven development (TDD).  In test-driven development, each modification to the production code is driven by a failing unit test.  This gave rise to a lot of tests with the spirit of (for instance), "when I pass null to this class constructor, it should throw a null argument exception" alongside tests that expressed business purpose.  TDD isn't specific, per se, about the level of granularity of the tests that you write to drive production code modifications.

BDD emerged as an extension and narrowing of this process by having more preferences as to the nature of the tests.  The tests themselves start to take on the following properties.

  1. Descriptive, conversational names
  2. Expressions of acceptance criteria of the software
  3. At a level of granularity that is meaningful to users/stakeholders of the software

So now, there's a framework where you drive all modifications to production code by describing, with an executable specification, a current shortcoming of the system.  To bring this into the realm of specifics, consider this example of BDD that you'll see a lot more of as time goes on.  Let's say that you're working on a calculator app and that, so far, you've implemented addition, subtraction, and multiplication.  Next up is division.  But, remember, you don't just open up your IDE and start hacking away at the production code.  You first need a failing acceptance test to describe the system's shortcomings.

Scenario: Regular numbers
    * Given I have entered 3 into the calculator
    * And I have pressed divide
    * And I have entered 2 into the calculator
    * When I press equal
    * Then The result should be 1.5 on the screen

This is what your test looks like.  There's a test runner that understands how to parse this English, translate it into code in your domain, and execute it.  So, at the moment you need to add the division feature, the first thing you do is describe what success looks like.  This makes it a lot easier to keep your eyes on the prize, so to speak.  This isn't some kind of purist take on process, but a refreshingly pragmatic one.

How Does It Work?

You might have noticed that the English readable text I presented is conversational-ish.  It's a bit stilted with each statement starting with "Given" or "When" or what have you.  That's because it is written in a very readable language known as "Gherkin," which is described as a "business readable, domain specific language."  Then, a test runner of sorts, known as "Cucumber," executes these acceptance tests by parsing the Gherkin and mapping it to actual code that you've written to exercise your application.

To bring the concept home a little more concretely, you have to map the English to actual methods in your acceptance test code.  So, you would have a scheme for binding the text following "Given" to a C# source method, such as via an attribute that contained the text "I have entered (.*) into the calculator."  This attribute would sit on top of a method that took an integer, x, as a parameter, and, in that method, you'd probably instantiate a Calculator and then call calculator.Press(x).
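In the .NET world, this Gherkin-to-code binding is most commonly done with SpecFlow. As a rough sketch (the Calculator class and step regexes here are illustrative, not from a real project, and the code needs the SpecFlow package to compile), the bindings for the division scenario might look something like this:

```csharp
using System.Collections.Generic;
using TechTalk.SpecFlow;

// Hypothetical system under test, for illustration only.
public class Calculator
{
    private readonly List<double> _entries = new List<double>();
    private string _operation;

    public double Result { get; private set; }

    public void Enter(double number) => _entries.Add(number);
    public void PressDivide() => _operation = "divide";
    public void PressEqual()
    {
        if (_operation == "divide")
            Result = _entries[0] / _entries[1];
    }
}

[Binding]
public class CalculatorSteps
{
    private readonly Calculator _calculator = new Calculator();

    // The regex capture group binds "3" in the Gherkin text to the
    // method's number parameter.
    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEntered(double number) => _calculator.Enter(number);

    [Given(@"I have pressed divide")]
    public void GivenIHavePressedDivide() => _calculator.PressDivide();

    [When(@"I press equal")]
    public void WhenIPressEqual() => _calculator.PressEqual();

    [Then(@"The result should be (.*) on the screen")]
    public void ThenTheResultShouldBe(double expected)
    {
        if (_calculator.Result != expected)
            throw new System.Exception($"Expected {expected} but got {_calculator.Result}");
    }
}
```

The runner matches each Gherkin line against these attribute patterns at execution time, so the plain-English scenario and the step code stay loosely coupled.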

That's really all there is to it.  You write sentences in English that demonstrate to anyone interested how the system should work, and then you write code that expresses those sentences.  The result is a series of executable tests that can do a pretty good job of describing what capabilities the system has and what capabilities are currently under construction or not working.  The cynic might say that the biggest benefit is being able to tell project managers to stop asking for status and to go read the report of the test run.  The optimist would say that the biggest benefit is closing the gap between technical and non-technical stakeholders in terms of understanding the system's capabilities.

They're both right.

Come back for the next post in the series, where I'll show you how to get started doing this, from scratch, in a .NET code base.

Infragistics Ultimate 15.2 is here. Download and see its power in action!

Introducing the new Busy Indicator


Hey, everyone. This is Brian Lagunas and in this video, I'm going to show you how to implement some multi-threading and report the progress of that long-running process to your end users, using the new xamBusyIndicator control, just released with the Infragistics Ultimate 15.2 WPF controls.

[youtube] width="420" height="315" src="http://www.youtube.com/embed/-uVJ6utvRhk" [/youtube]

What we have here is an application that has one view and a view model. This view is very simple. It contains a list box that is bound to a collection of items, and then we have a button that will kick off the process of creating those items. We are using Prism’s view model locator to set the data context of the view model, so let's take a look at the view model. As you can see, we have a property, a list of string, that will represent the items we're creating. We have our command, called StartProcessCommand, that is bound to the button in our UI. When this button is clicked, the command will be invoked and we will generate ten million items to be added to this list of strings, to which our list box is bound.

If we run the application as it is, we can see that we have our list box here, and there's nothing going on. Then I'm going to click the start process button. I can see immediately our UI is frozen, I can't move anything around, I don't know what's going on; as an end user it's just stuck. When the process is complete, the UI becomes responsive again, and now I can see the items we've created. This is not what we want. What we're going to do first is create a multi-threaded environment. We're going to take what we currently have and make it multi-threaded, using the new async/await features in .NET.

First, let's go ahead and convert this GenerateItems method into an asynchronous method. I'm going to do that by creating a new task; I'm just going to wrap what we already have in a task, and then we have to return a task. We're going to say "return Task.Run," and then we're going to wrap our current logic inside of that Task.Run. The next thing we're going to do is come up to the StartProcess method, add the async keyword, and then add an await just before our GenerateItems call. Now let's run this application again.
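The refactoring described above can be sketched in plain C#. The names GenerateItems and StartProcess follow the video; the item count is reduced here, and a real WPF app would also need to marshal collection updates back to the UI thread, which this sketch omits.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

var items = new List<string>();

// Wrap the existing synchronous loop in Task.Run so it executes on a
// thread-pool thread instead of blocking the UI thread.
Task GenerateItems(int count) => Task.Run(() =>
{
    for (int i = 0; i < count; i++)
        items.Add("Item " + i);
});

// The command handler becomes async and awaits the background work,
// so the UI thread stays responsive while items are generated.
async Task StartProcess(int count) => await GenerateItems(count);

await StartProcess(100);
```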

Let's see what happens when I click the start process button. As you can see, the UI remains responsive. I don't know exactly what's going on, because I clicked the button... Oh, something is happening. Apparently something was going on in the background. This is because we took that long-running process and stuck it on a separate thread, so our UI remained responsive. The next step is to let our end user know that, hey, something is happening, there is a process going on, so don't click everywhere. Be patient, and then you will see the results of that process.

To do that, we're going to modify our UI to utilize the new xamBusyIndicator control that just released with our WPF controls. I go into my toolbox, and I'm going to find the xamBusyIndicator. I'm just going to drag it into my view, and then I'm going to actually wrap our current view with the xamBusyIndicator. In order to get the xamBusyIndicator to show, we have to bind its IsBusy property to a property in our view model. We'll assume this is called "IsBusy." That means we have to go to our view model and add a new property. It's going to be a boolean called "IsBusy." Next, we're going to set IsBusy to true before we start our process, and then set it to false when the process has completed. Let's run the application.
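A minimal sketch of that view model change is below. A local flag stands in for the view model's IsBusy property; in the real app the property setter raises PropertyChanged so the bound xamBusyIndicator updates.

```csharp
using System;
using System.Threading.Tasks;

// Stand-in for the view model's IsBusy property; the real setter
// raises PropertyChanged for the xamBusyIndicator binding.
bool isBusy = false;

async Task StartProcess(Func<Task> generateItems)
{
    isBusy = true;               // busy indicator appears
    try
    {
        await generateItems();   // long-running background work
    }
    finally
    {
        isBusy = false;          // indicator disappears, even on failure
    }
}

await StartProcess(() => Task.Delay(10));
```

Using try/finally guarantees the indicator is hidden again even if the background work throws.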

Now I'm going to start the process, and we can see something is going on. There is a process going on in the background, my UI remains responsive, but as an end user, I know that something is happening; and now that the BusyIndicator has disappeared, our results are displayed to us. Let's say I'm not happy with that animation. The BusyIndicator actually has a number of animations to choose from. It actually ships with eight: we have azure, gears, progress bar, progress ring, and more. Let's just stick with the azure. Let's go azure.

Now we have the azure animation. I'm just going to run the app, and I'm going to start the process again, and now we have a new animation. You can see that the start process button has been grayed out because the BusyIndicator is covering all the interaction points so that our end users are not confused. What's really cool is we can modify that azure animation to say, "You know what, I want the top element to be green, or the lower elements to be green, and we'll make the top ones red." We're changing the colors of the animation. As you can see, those colors are reflected when the animation is shown.

Now that we have the ability to show the BusyIndicator, play with the animation, and change the colors of the animation, let's go back and report the progress. Currently what we have is called an indeterminate animation. The animation keeps spinning and spinning; we have no idea when it's going to finish, we just know something's occurring. Let's say that we know how far we are into the process. We have a number of records, and we know how many records have been created. I want to report the progress of where we are to the end user. First, let's start reporting progress from our asynchronous operation. Let's start by creating a new IProgress<T>, which will be a double. We'll make it a new Progress. Then, in this update progress callback, we're going to pass the percentage in order to update the UI. That means we need a new property. This is going to be a double. Then in here, we will set Progress equal to the percentage. Now, calculating this percentage is even easier. We're basically going to say, "if i mod 1500 equals 0, we're going to divide i by the maximum number of records to get the percentage complete." Then we're going to take our progress and report that percentage. Now, the reason we do this is because we don't want to flood our UI with update messages. We just want to update every now and then.
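A sketch of that progress reporting follows. The property name is an assumption; Progress<T> is the standard .NET type that marshals Report calls back to the SynchronizationContext captured at construction, which in a WPF app is the UI thread.

```csharp
using System;
using System.Threading.Tasks;

// Stand-in for the view model's Progress property (a double, bound to
// the indicator's ProgressValue in the view).
double progressValue = 0;

Task GenerateItems(int totalRecords)
{
    // Progress<T> captures the current SynchronizationContext, so in a
    // WPF app this callback runs back on the UI thread.
    IProgress<double> progress = new Progress<double>(pct => progressValue = pct);

    return Task.Run(() =>
    {
        for (int i = 1; i <= totalRecords; i++)
        {
            // Report only every 1500 records so we don't flood the UI
            // with update messages.
            if (i % 1500 == 0)
                progress.Report((double)i / totalRecords * 100);
        }
        progress.Report(100); // make sure the final state is reported
    });
}

await GenerateItems(3000);
```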

The next step is to update our UI. First, let's go ahead and change our animation. We need an animation that shows progress, so let's go to the progress ring. Now we want to set the IsIndeterminate property to false, because now we have a determinate state that we want to manage. Then we take the ProgressValue and create a binding to the Progress property that we're updating in the view model. Let's run the application and start the process. Now we can see the UI is updated with the percentage complete of the current task we are running. When it hits 100 percent, the results are displayed to our end users. That's all there is to it. As you can see, it's extremely easy to add multi-threading to your application and report the progress of that long-running process to your end users using the new xamBusyIndicator control, now available with the Infragistics Ultimate WPF controls.


ISTA 2015 | UX Design and Software Development


I have been invited to represent Infragistics at the Innovations in Software Technologies and Automation (ISTA) conference being held in Sofia, Bulgaria on 19 November 2015. I get to share the stage with some excellent local and international speakers, including Infragistics’ own Lucia Amado, who will be presenting a workshop titled “Visual Design for Non-Designers”.

My presentation, “Design, Usability and Complex Systems”, centers on the UX design process and how incorporating it into traditional software development processes ensures both useful and usable applications. In particular, I describe the nature of technology evolution and the role of UX.

Technology Evolution

All technology begins by meeting a particular set of user needs. That functionality, however non-intuitive or difficult to use, is adopted because it’s better than what previously existed (or didn’t exist). Think about early automobiles. They were difficult to drive, unreliable and difficult to maintain – but they were better than horses (at least in certain key aspects). As users’ basic needs were met, the design of automobiles evolved to encompass technical reliability but more importantly, secondary user needs like heated seats, large-display GPS systems and trunk space sufficient to fit 2 golf bags.


Software Evolution

Software, business applications in particular, has evolved very little. It remains difficult to use, complicated, and frustrating despite its independence from the restrictions of the physical world. Why has it been allowed to stagnate? Why are we, as daily users of poorly designed software, OK with this? The reason, I argue, is that software has been granted a “special” status. We believe that arcane and difficult interfaces are simply as good as they can be; that the rules of evolution don’t apply (or that the evolutionary process has run its course).

The sad truth is that, when faced with complex systems, people internalize their difficulties, believing, to paraphrase Shakespeare, that the fault lies not within our applications but within ourselves. And we creators of software share the blame. Who hasn’t worked on a new release whose goal was simply to add more functionality, without considering the overall impact on people?

The Role of UX

The role of UX, therefore, becomes one of showing people – users and developers alike - a better way.  Within existing software development processes, UX works to discover not only what users want but also what they need and to then coordinate these “requirements” with business requirements and technological constraints. Within the UX Design process software evolves, not simply into an ever-increasing list of features and functions, but into applications that improve peoples’ lives in ways they could not have requested.

---------------------------------------------------------

Kevin Richardson has been working in the area of user experience for 25 years. With a PhD in Cognitive Psychology, he has deep experience across business verticals.

On the weekends, you can find Kevin on his motorcycle, racing for Infragistics Racing.

The Future of Collaboration with SharePoint and Office 365


Collaboration can be defined as the action of working with someone to produce something. And it’s interesting to know that it’s quite a new word:

As the above graph shows, until 1880 no one collaborated (although it might just be that they were using different words for it..)!

Collaboration in the enterprise is a very important factor in success. Working alone, people may be very good at doing one or two particular things, but by combining knowledge and skills, work can be done more easily and in less time. An all too common phrase you don’t want to hear in your organization is: “If I’d known person X had done this before, we would have done this differently and probably succeeded”.

Collaborating should be promoted in any organization and the required tools and technologies made widely available. Technology is not the only driver when it comes to successful collaboration, but it is a key enabler.

Collaborate with SharePoint and Office 365

SharePoint is quite possibly the world’s most popular collaboration tool and is now also part of Office 365. Introduced in 2001, SharePoint delivers document and records management, intranets and extranets, workflows, and much more to the digital workplace. Currently there are two versions of SharePoint - the on-premises version and the online version in Office 365. Microsoft has recently released the public preview of SharePoint 2016, and is also constantly improving Office 365. We will look at what SharePoint 2016 will bring when it becomes generally available in 2016, and what Office 365 has on its roadmap when it comes to collaboration.

Groups: The next big thing

Without a group there can be no collaboration. A group can be a business unit, a project team, or any other team of people working together to achieve something. In SharePoint, this usually translates into a site or a subsite. While these work fine, they require setting up a site, permissions, content pages and so on. Basically, this adds a level of complexity, and therefore an obstacle to collaboration.

However, Microsoft has now introduced a feature called “Groups” in Office 365, which according to some people, is the next big thing. A Group can be created in a couple of clicks, and by doing so the Group’s members will get an email box and a place to store documents. And that’s it. It is very easy to start working together and do the two essential things for effective collaboration: communicating and working together on documents.

This feature is also available in separate apps for iOS, Android, and Windows Phone and is now also available in Office 2016. See here for more information.

Adding attachments made easier

One of the most common uses of Outlook is sending files to colleagues and peers. And if you’ve used SharePoint for long, you’ll know that sending attachments is a big ‘no-no’. This is because every attachment means a new copy of that file for each recipient. So, when a user sends a file to five people in their team, there would now be six different versions of that file.

Storing this file in SharePoint is the first step to avoiding duplication, but it still involves copying/pasting a link into Outlook to share it with other people. This has been improved in both Outlook Web App and Outlook 2016: when clicking “Add Attachment”, the user is presented with recently modified files in SharePoint. When hitting send, the recipients will see the attachment in the email as they’re used to, but the file itself is stored in SharePoint! So, there is only one version of that file.

A true hybrid search

Microsoft would probably rather like it if everyone used Office 365, but has acknowledged that there are use cases for On-Premises software as well. For example, when other internal LOB systems are used, or when data sovereignty is an issue, or when companies want full control over their software, On-Premises is the way to go. But what if a company wants both Office 365 and SharePoint On-Premises? This is called a hybrid solution. The hybrid search in SharePoint 2013 was not really hybrid as it still resulted in separate lists of search results. However, this has improved dramatically in SharePoint 2016, and it is also available for SharePoint 2013 with the August 2015 update. The cloud search service application will bring a true hybrid search experience - see this blog post for more information.

‘NextGen Portals’: more out-of-the-box templates

SharePoint has a list of built-in site templates. For example, a publishing center is a very good starting point for an Intranet, while a Team Site is the most common template for collaboration. However, these templates haven’t received major updates for a while now. And this is where the so-called ‘NextGen Portals’ in Office 365 come into play.

Next-Gen Portals offer a mobile-friendly, redesigned space to implement an information management portal or Intranet. The first examples are Office 365 Video and Delve, and more is to come. For example, the portal codenamed InfoPedia is the highly anticipated NextGen Portal for information management.

The future looks bright

With the next SharePoint version now available in public preview, and a lot of new features released or soon to be released for Office 365, the future for collaboration in the enterprise looks very promising. Microsoft has improved its most popular application - Outlook - by making sending attachments easier, and by adding the Groups functionality to Outlook 2016. InfoPedia will hopefully be released in the next year or so as well, delivering a ready-to-use information management portal for Office 365.

All of these features drive collaboration. And the easier that is, the more people will work together, and the higher our efficiency and rate of innovation. So, make sure you are using these features!

Looking to mobilize your corporate SharePoint? Check out Infragistics' SharePlus' demo: the ultimate mobile productivity solution for SharePoint and Office 365.

Webinar Recap: Getting Started with ASP.NET MVC


On November 6th we hosted a webinar titled “Getting started with ASP.NET MVC” for the Indian region and we’d like to share the presentation and recorded webinar with you now! In the webinar, we covered:

  • Getting started with ASP.NET MVC
  • Understanding Controllers
  • Understanding Views
  • Understanding Model
  • ViewBag, ViewData, and TempData
  • Strongly Typed and dynamic views
  • Child Action and Partial Views
  • Areas
  • Database first approach

You can find the recording of the webinar here:

[youtube] width="560" height="315" src="http://www.youtube.com/embed/R7iksUKmfco" [/youtube]

You can also find the presentation slides here.

Some of the questions from the webinar are below:

What is the difference between ViewBag and ViewData?

ViewBag and ViewData are both used to pass data from controller to view.  Their characteristics are as follows:

ViewData is a property of type ViewDataDictionary:

  • Data can be passed in the form of a key-value pair.
  • To read the complex type data on the view, typecasting is required.
  • To avoid exception, null checking is required.
  • Life of ViewData is restricted to the current request and becomes Null on redirection.
  • ViewData is a property of the ControllerBase class

ViewBag is a property of dynamic type.

  • Data is passed as a property of the object.
  • There is no need of typecasting to read the data.
  • There is no need of null checking.
  • Life of ViewBag is restricted to the current request and becomes Null on redirection.
  • ViewBag is a property of the ControllerBase class.
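The casting difference between the two is easy to demonstrate even outside MVC, using a plain dictionary and an ExpandoObject as stand-ins for ViewData and ViewBag:

```csharp
using System.Collections.Generic;
using System.Dynamic;

// ViewData-style access: a string-keyed dictionary of object.
var viewData = new Dictionary<string, object>();
viewData["Products"] = new List<string> { "Pen", "Pencil" };

// Reading a complex type back requires a cast (and ideally a null check).
var fromViewData = viewData["Products"] as List<string>;

// ViewBag-style access: dynamic member access, no cast required.
dynamic viewBag = new ExpandoObject();
viewBag.Products = new List<string> { "Pen", "Pencil" };
List<string> fromViewBag = viewBag.Products;
```

The convenience of ViewBag comes at the cost of compile-time checking: a mistyped member name only fails at runtime.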

What is a Child Action?

Child Actions are the action methods which can be invoked within the view.

  • This is used to work with the data in the view, which are not related to the main action method
  • In ASP.NET MVC any action can be used as a child action
  • To use an action only as a child action, attribute it with [ChildActionOnly]. This will make sure the action is not called by any user request and will only be used in the view.
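A sketch of such an action is below. The controller, action, and partial view names are made up for illustration, and the code requires the System.Web.Mvc assembly (ASP.NET MVC 5), so it is shown here as a fragment rather than a runnable program.

```csharp
using System.Web.Mvc;

public class NewsController : Controller
{
    // [ChildActionOnly] prevents this action from being reached via a URL;
    // it can only be rendered from within a view, e.g. with
    // @Html.Action("LatestHeadlines", "News")
    [ChildActionOnly]
    public ActionResult LatestHeadlines()
    {
        var headlines = new[] { "Headline one", "Headline two" };
        return PartialView("_Headlines", headlines);
    }
}
```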

What are the different types of Views?

Different types of Views are:

  1. Dynamic view
  2. Strongly typed view

Is it a must to use Entity Framework in MVC applications?

No, it is not required - you can use any database technology to work with databases in the MVC application, including but not limited to:

  • LINQ to SQL
  • Entity Framework
  • ADO.NET
  • Any ORM

Once again, thank you so much for your interest in our webinars – and we look forward to seeing you at a future webinar!

Jitter - Another Solution to Overplotting


Back when I discussed tricks for coping with overplotting I omitted (at least) one popular "solution": jittering the data. Jittering is the process of adding random noise to data points so that, when they are plotted, they are less likely to occupy the same space. It is most commonly used when the data being plotted is discrete. In such cases, in the absence of jitter, it's not just the edges of data-point markers that overlap; the markers actually sit perfectly on top of each other. No amount of reduction in the size of the data point can remove this problem.

While jitter may be added to points in, for example, a box plot, it's most frequently used in 2D scatter plots. Chart A in the graphic below shows a contrived example dataset with no attempt to deal with overplotting. Both the x and y variables only take integer values. The dataset actually contains 2000 points, but there are only 780 unique points. Chart B shows the same data but with the addition of jitter. Specifically, for each point a random number drawn from the continuous uniform distribution between -0.5 and 0.5 is added to the x coordinate and another random number drawn from the same distribution is also added to the y coordinate. As noted previously, another potential solution is to make points semi-transparent (chart C) and, of course, these two options can be combined (chart D).
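Mechanically, the jitter described above is just an independent uniform draw per coordinate. A minimal sketch (in C#, with made-up integer-grid data standing in for the charts' dataset):

```csharp
using System;

var rng = new Random(42);

// Displace each integer-valued (x, y) point by independent uniform noise
// drawn from [-extent/2, extent/2] in each dimension, so that coincident
// points are unlikely to remain exactly on top of each other.
(double X, double Y)[] ApplyJitter((int X, int Y)[] points, double extent)
{
    var jittered = new (double X, double Y)[points.Length];
    for (int i = 0; i < points.Length; i++)
    {
        double dx = (rng.NextDouble() - 0.5) * extent;
        double dy = (rng.NextDouble() - 0.5) * extent;
        jittered[i] = (points[i].X + dx, points[i].Y + dy);
    }
    return jittered;
}

// 2000 points drawn from a small integer grid, so many coincide exactly.
var data = new (int X, int Y)[2000];
for (int i = 0; i < data.Length; i++)
    data[i] = (rng.Next(0, 10), rng.Next(0, 10));

var jittered = ApplyJitter(data, 1.0);
```

With extent 1.0, each jittered point stays within half a unit of its true position, so jittered points from different grid cells can touch but never swap cells.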

What conclusions are to be drawn from these plots? Because there are only 780 unique pairs of values, 1220 (61%) of the data points are completely obscured in chart A. With the addition of jitter alone, each point becomes unique, but there is still some degree of overlap between the dots used to represent them. Making the points translucent certainly helps show that there are more than the 780 points visible, but it's not always an acceptable option.

Because translucency and other alternatives (2D histograms, for example) aren't always acceptable solutions, it might be worth thinking about what other issues can arise with jittering data. One key concern may be that of integrity. If we move the points away from their "true" positions, are we deliberately distorting the data? While chart A above may seem like the more "correct" way to plot the data, chart B is better at showing approximately where most of the data is. In chart A, all points are in exactly the right place, but they are not all equally representative of the distribution of data; one visible dot can mark the position of anything from 1 to 12 data points. Without translucency or color or something else, there's no way of knowing which is which.

Despite this, I'd like to point out once again that you should consider your audience. Will they be confused by non-integer values being plotted for something they know can only be integer-valued, for example? What about points at the extremes that, in one dimension, are no longer even in the permissible range?

I think it's also worth studying some real-world data that will look familiar if you've been reading my other articles here. The GIF below shows a 2D histogram of RGB image data from an 8-bit png image (precise details and the image from which it is extracted can be found here). As the animation progresses the length of the uniform distribution from which jitter values are drawn (the "Jitter Extent") increases in both dimensions. Here the use of jitter does allow us to see more details about which of the value pairs occur most frequently. Because the data remains in square blocks, there is still the sense of there only being a modest number of discrete values in the underlying data.

In that previous article I also looked at the distribution of blue and green values for low- and high-quality JPEGs of the same initial image. The GIF below shows the effect of adding jitter to these. Aside from the points spilling out beyond the confines of the axes (which looks weird, if nothing else), the clear differences between the two scatter plots disappear as the Jitter Extent increases. This is highly undesirable.

As with many things in data visualization, there's no clear answer to the simple question: "Should I jitter data points?". Jitter can help clarify where the bulk of the data lies but it can also distort important patterns. Where appropriate I prefer to use translucency, but sometimes — e.g. when the color of points already tells us something important — that isn't an option.

Bring high volumes of complex information to life with Infragistics WPF powerful data visualization capabilities! Download free trial now!

It’s never too late to learn how to code


“We've arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”

- Carl Sagan

If science writer Carl Sagan’s prophesying seems a little melodramatic, we still agree with the sentiment. According to eMarketer's stats, more than 2.5 billion people – over one third of the global population - will own smartphones by 2018. However, while they’ll be carrying computing power in their pockets that only 30 years ago would have required an entire room, the proportion who actually understand how their mobile works will remain incredibly low.

Why’s this a problem? From fraud and hacking to Artificial Intelligence taking your job, if you don’t understand how computers work, you expose yourself to a lot of risks. On a more day-to-day level, an understanding of code is a little like understanding how your car’s engine works: it can save you a lot of time and money when dealing with the problems it throws up. And of course, there are also a lot of benefits to learning to code. While you may not become the next Zuckerberg or Wozniak, coding is fun. The basics aren’t as hard as you might think, and there are a lot of resources out there to help you!

The benefits of learning to code

A lot of people take one look at a page of HTML and think “that’s not for me”. There’s no doubt that learning to write code can feel pretty daunting. If you weren’t a grade A math student or found computer science tough at school, the myriad dashes, dots, slashes and brackets can look pretty alien. But don’t let that put you off. There’s method to the madness, and with a little bit of determination you’ll soon experience some real advantages:

  • Better understanding of how IT works

If you’ve ever wondered at exactly how Google Maps finds where you and your Smartphone are, the answer is (basically) code! Every single computer program, web page and mobile app you’ve ever used is, of course, based on code. So, as you learn the basics, you’ll begin to have a much better understanding of the technology you use every single day.

  • Undergo brain training

Writing code is essentially about solving problems by using logic. Once you’ve learnt the basics of a programming language, coding requires you to turn that knowledge into a problem-solving tool. Few other activities require such a pure approach to problem solving – it’s both mentally challenging and really engages all your brainpower to find solutions.

  • Find solutions to niggles

Following on from the previous point, coding can be used to solve problems in your everyday life. You don’t need to be a full-time professional to set up your own little programs that automate boring day to day tasks. From updating spreadsheets to automating folder creation, you can create programs which will save you time on a daily basis. These can often be completed in a couple of hours.

  • Improve your CV

In so many professions, a knowledge of programming languages will make your CV stand out. While it’s increasingly common for people to have some coding knowledge, it’s still far from universal. So, even if the job isn’t purely development oriented, if your next career move requires use of the Internet and software (i.e. basically every job), your knowledge will make your CV stand out from the pile.
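To make the “solutions to niggles” point above concrete, here’s a minimal sketch of the kind of everyday automation we mean – a few lines of Python that tidy a folder by sorting files into subfolders named after their file type. The function name and file names are purely illustrative, and the demo runs in a throwaway directory so it’s safe to try anywhere.

```python
import tempfile
from pathlib import Path

def sort_by_extension(folder):
    """Move each file in `folder` into a subfolder named after its extension."""
    root = Path(folder)
    for item in list(root.iterdir()):  # snapshot the listing before moving files
        if item.is_file() and item.suffix:
            target = root / item.suffix.lstrip(".")  # ".xlsx" -> "xlsx"
            target.mkdir(exist_ok=True)
            item.rename(target / item.name)

# Demo in a temporary directory with a couple of dummy files.
demo = Path(tempfile.mkdtemp())
(demo / "sales.xlsx").touch()
(demo / "notes.txt").touch()
sort_by_extension(demo)
# demo now contains xlsx/sales.xlsx and txt/notes.txt
```

That’s the whole program – a couple of hours of learning gets you to the point where a chore like this takes a dozen lines.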

I get it, so what next?

So, if you’ve decided that coding is for you, how should you go about learning to do it?

1. Get your knowledge of computer science up to scratch

If RAM, CPU and algorithms all sound a little intimidating, we’d recommend reminding yourself of the basics of computing. With those building blocks in place, you’ll find telling a computer what to do a lot easier.

2. Free, part-time courses

There is a huge range of free, online courses out there. Code.org or Codecademy are decent places to start. If you’re looking for information on a specific solution, it’s almost certainly out there (and quite likely on GitHub).

3. Paid online courses

Sites like Coursera offer paid (but affordable) courses on a lot of topics, including computer science – often from some of the best universities in the world. Paid courses mean you have a stronger guarantee of quality and usually offer some sort of certificate – making them great for your CV.

4. Attend evening classes or even go back to school

We’d definitely recommend trying the online options first – not only will they save you cash, but more importantly they’ll help you decide whether you actually enjoy writing code. However, if you want to take things more seriously, attending classes is a good option. You’ll be given assignments and will have the support of a professional instructor and peers.

