Channel: Infragistics Community

What is Empathy and Why Designers Need to Understand It


Titanic & Monkey Business. Part 1 of Empathy Series.

Empathy is said to be a core tool in user experience. You might even refer to it as “Innovation’s pixie dust.” That makes it seem pretty important, and it is. Because it’s so important, we need to understand what empathy is and how it works. This will provide a context for, and enrichment of, our professional work.

What is empathy according to psychology?

“Trying to gain weight can often feel like an uphill battle.” — said Sympathy (and almost no one struggling with obesity).
“I know it’s not easy to lose weight because I have faced the same problems myself.” — said Empathy.

By definition, empathy is the ability to share someone else’s feelings or experiences by imagining what it would be like to be in that person’s situation.

Empathy is what makes movies great and brings in huge box office grosses. From the audience’s point of view, it’s what makes so many people react the same way to a movie, like crying their eyes out watching Titanic.

Titanic-like movies trigger the emotional part of empathy, tapping into an emotion that we’ve felt before and allowing us to experience it again in the movie theater. But there is another part of empathy known as “cognitive empathy”. Cognitive empathy helps us understand why people act the way they do. It allows us to see things from their perspective.

We can often see the results of empathy written in the facial expressions of others. But what is going on behind the scenes?

How is empathy manifested physiologically?

In the 90s, scientists conducting an experiment involving primates noticed an interesting phenomenon, later defined as a result of mirror neuron activity.

Quick description: a monkey is wired to a device. Scientists observe the firing of certain neurons in its brain while the monkey grabs a banana. At lunchtime, a hungry scientist grabs a banana in sight of the monkey. The device shows that the same neurons that fired when the monkey was reaching for the banana also fire when the monkey observes the experimenter reaching for one.

Empathy, then, initially gained recognition as monkey business.

The neurophysiological evidence suggests that the monkey was provided with a vicarious experience — the very definition of empathy.

Not surprisingly, mirror neurons are also found in humans, in areas of the brain that account for spatial recognition, control of movement, language processing, touch, and pain processing.

While there is certainly more to the physiology of empathy, I am not a specialist in that area and won’t have much more to say about the gooey gray matter that links us with (at a minimum) the rest of the primate world. I do, however, have something to say about empathy and zombies ;-) but you’ll have to wait for my second blog about empathy for that!


ReportPlus Desktop Release Notes – Volume Release 1.0.1231.0


ReportPlus Desktop Version 1.0.1231.0 is here with the following enhancements and fixes:

Enhancements

  • Exporting a dashboard as a .rplus file now prompts the user to include their local data files in the .rplus file

Fixed issues

  • Widget cannot be maximized after being flipped to grid view
  • The selected theme in the theme chooser is not correct
  • MySQL® data source editor suggests the wrong default port 1433; the correct one is 3306
  • Opening a widget with a Facebook data source in edit mode does not show the selected account and selected metric period
  • Widget General Settings help navigates to an incorrect location
  • Sync progress message shows incorrect information about the number of items being synced
  • After opening a dashboard shared with me, the Rename option becomes visible in the dashboard's thumbnail context menu
  • Back button returns to dashboard selection instead of minimizing the widget
  • Bar chart markers are too big in cultures with "," as the decimal separator
  • Incorrect values are shown for the dynamic tabular global filter

Build 1.0.1231.0 - Aug-25-2016

Selecting and Deploying a Self-Service Data Visualization & Data Discovery Tool


As the buzz around “Big Data” being the future of enterprise BI has gradually died down, the focus has moved towards the world of data visualization. While Big Data of course holds considerable value – allowing companies to glean new insights and make more informed decisions – it needs a standardized framework in order for us to gain a real understanding of it, especially at large scale. Data visualizations are the method for expressing the insights found within Big Data.

“…progressive organizations today are using a wide array of data visualization (dataviz) tools to ask better questions of their data – and make better business decisions.”

Phil Simon

The wide array of tools author Phil Simon refers to is an area that continues to grow. One of the biggest variables between current tools on the market is whether they’re self-service tools: that is, whether individual users without technical experience or know-how can make use of the tool without external assistance (provided they have the data!). For business intelligence (BI) tools, self-service functionality is a big selling point, offering users an added level of control, flexibility and customization over their work. Before you’re able to reap the rewards of self-service analytics software, however, you need to consider two important areas – the selection process and the deployment process. In this post, we’ll explore how you should go about both of these: what to consider and why.

Choose wisely

Like any kind of selection process, it’s vital to do your research. Both Big Data and data visualization have increased in popularity massively over the past couple of years, as Simon recognizes:

“IBM, Cognos, SAS and other enterprise BI stalwarts are still around, but they are no longer the only game in town. Today, an organization need not spend hundreds of thousands or millions of dollars to get going with dataviz. These new tools have become progressively more powerful… …they have made it easier than ever for employees to quickly discover new things in increasingly large datasets.”

As there is indeed more competition than ever, we should perhaps refine ‘research’ to ‘extensive research’ – there’s a lot to choose from, with each tool offering its own impressive features. The good news is, we can get you started right now! Below is a checklist of some of the best features that current business intelligence tools have to offer:

DIY

No prizes for guessing this one! As we’ve already mentioned, the self-service aspect allows users to work on creating visualizations and dashboards in their own time, whilst still matching the functionality and capabilities of non-self-service tools. While more and more tools are realizing the benefits of self-service, it’s not a guarantee, so make sure you know if the tool you’re considering has self-service capabilities and a wealth of data visualization types to choose from.

The more, the merrier

Aside from the benefits of being able to work at your own individual pace, the ability to collaborate with your coworkers is something to keep an eye out for as well. For that reason, BI tools that accommodate sharing functionality should earn priority. By sharing dashboards and visualizations that update in real time – whether it’s via email, dedicated apps or otherwise – teams can make data-driven decisions confidently together. Some tools also allow for exporting content to Microsoft PowerPoint, Word and PDF formats, with annotations for additional comments and insight.

Gather data from anywhere

As we generate more data than ever, the locations where it’s all stored are multiplying. Top-tier tools will let you connect to your data whether on-premises, in local files or in the Cloud – be it OneDrive, Google Drive, Dropbox, etc. Companies should also look out for tools that support a range of data sources, such as SQL, Salesforce and Microsoft Dynamics CRM.

There are, of course, many other features that available business intelligence tools have to offer – some exclusive, some common. As such, we suggest analyzing what exactly you want to get out of a data visualization tool, and deciding what would be the best fit for your company.

Deployment

Once you’ve found the right tool for you, the next stage is deploying it across your business. Like any deployment process, there is a series of questions you should ask to ensure it will be deployed effectively and efficiently.

What about your users?

Before you begin the deployment process, you need to understand where it is you’re deploying to. As in, how will users be accessing these tools? More modern tools include mobile apps for users to work on and share their data whilst on-the-go. Think about their needs (and wants!).

On-Premises or Cloud?

This is a decision that companies have either recently made or should be making in the near future, as the Cloud plays an increasingly big role in the enterprise. Each deployment method has its own merits; however, most major tools lean towards the power of the Cloud. In terms of scalability and storage, the Cloud holds a definite advantage, but for those who list security as the number one priority, an on-premises server may be the more appealing option.

Are we prepared?

One of the best advantages of self-service analytics is that non-technical users need not be put off, as user-friendly UIs make navigation and creation simple and straightforward. This means that virtually no user training is required; instead, users can jump straight in with the software.

What a data to be alive

Offering users the means to showcase their insights and findings in striking fashion, there has never been a better time to invest in the power of data analytics. ReportPlus gives you the power of data analytics combined with the flexibility of self-service software. It is a cloud-based or on-premises data visualization service that allows you to visualize the metrics that matter most to your business in one place. Monitor the most important KPIs and know the health of your business with real-time dashboards, create rich interactive reports and access data on the go with ReportPlus apps for iOS, Windows and Android, or on the Web.

To bring self-service data visualization and BI to your organization, try ReportPlus free for 30 days!

What are Closures in JavaScript?


A JavaScript closure is a function which remembers the environment in which it was created. We can think of it as an object with one method and private variables: a special kind of object which contains a function along with the function's local scope and all of its variables (the environment) at the time the closure was created.

To understand closures, we first need to understand scoping in JavaScript. We can create a variable or a function at three levels of scoping:

  1. Global scope
  2. Function (local) scope
  3. Lexical scope

I have written in detail about scoping here, but let’s take a brief walkthrough of scoping before getting into closures.

Scopes in JavaScript

A variable created outside of any function is in the global scope. So, if we have created a variable which is not inside any function, it is in the global scope.

var foo = "foo";
console.log(foo);

If something is in the global scope, we can access it anywhere – which makes it our friend and enemy at the same time. Putting everything in the global scope is never a good idea, because it may cause namespace conflicts among other issues: since a global can be accessed from anywhere, a variable with the same name inside a function can cause conflicts.

Any variable or function which is not inside a global scope is inside a functional or local scope. Consider the listing below:

function foo() {
    var doo = "doo";
    console.log(doo);
}

foo();

We have created a variable doo inside the function scope of the function foo. The lifetime of the variable doo is local to the function foo, and it cannot be accessed outside of foo. This is called local scoping in JavaScript. Now consider the listing below:

Here we have created a variable in the function foo with the same name as a variable in the global scope, so now we have two variables. Keep in mind that these are two different variables, each with its own lifetime. Outside the function, the variable doo with value "a" is accessible; inside the function foo, the variable doo with value "doo" exists:

var doo = "a";

function foo() {
    var doo = "doo";
    console.log(doo); // prints "doo"
}

foo();
console.log(doo); // prints "a"

Let us tweak the above code snippet a bit, as shown in the listing below:

var doo = "a";

function foo() {
    doo = "doo";
    console.log(doo); // prints "doo"
}

foo();
console.log(doo); // prints "doo"

Now we do not have two scopes for the variable doo. Inside the function foo, the variable doo created in the global scope is getting modified.  We are not recreating the variable doo inside foo, but instead modifying the existing variable doo from the global scope.

 

While creating a variable in the local or function scope, we must use the keyword var. Otherwise, the variable will be created in the global scope, or, if a variable with that name already exists in the global scope, it will be modified.

In JavaScript, we can have a function inside a function. Functions can be nested to any level, meaning we can have any number of functions nested inside each other. Consider the listing below:

function foo() {
    var f = "foo";

    function doo() {
        console.log(f);
    }

    doo();
}

foo(); // prints "foo"

Essentially, in the snippet above we have a function doo which is created inside the function foo, and it does not have any variables of its own. The function foo creates a local variable f, and that variable is accessible inside the function doo. Function doo is the inner function of function foo, and it can access the variables of function foo. Also, function doo can be called inside the body of the function foo. Function doo can access the variables declared in its parent function, and this is due to the lexical scoping of JavaScript.

There are two levels of scoping here:

  1. Parent function foo scope
  2. Child function doo scope

Code inside the function doo has access to everything created inside the scope of function foo, due to the lexical scoping of JavaScript. However, function foo cannot access the variables of function doo.

Closures in JavaScript

Let us start understanding Closures in JavaScript with an example. Consider the listing as shown below. Instead of calling the function doo inside the body of the function foo, we are returning function doo from function foo.

function foo() {
    var f = "foo";

    function doo() {
        console.log(f);
    }

    return doo;
}

var afunct = foo();
afunct();

 

In the above listing:

  1. function foo returns another function, doo;
  2. function doo does not have any variables of its own;
  3. due to lexical scoping, function doo is able to access the variables of its parent function foo;
  4. function foo is called and its result is assigned to a variable afunct;
  5. then afunct is called as a function, and it prints the string “foo”.

 

Surprisingly, the output of the above code snippet is the string “foo”. Now we might get confused: how is the variable f accessed outside the function foo? Normally, the local variables within a function only exist for the duration of that function's execution, so ideally, after foo finishes executing, the variable f should no longer be accessible. But in JavaScript we can still access it, because afunct has become a JavaScript closure. The closure afunct carries the function doo together with all the variables of function foo’s local scope as they were when the closure was created.

 

In the case of a closure, the inner function keeps a reference to the outer function’s scope. So, in closures:

  • The inner function keeps a reference to its outer function’s scope. In this case, function doo keeps a reference to function foo’s scope.
  • Function doo can access variables from function foo’s scope at any time, even after the outer function foo has finished executing.
  • JavaScript keeps the outer function’s (foo in this case) scope reference and its variables in memory as long as an inner function exists and references it. In this case, the scope and variables of function foo will be kept in memory by JavaScript as long as function doo exists.
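To see that each call to an outer function produces its own independently remembered scope, consider this counter sketch (makeCounter is our own illustrative name, not from the article):

```javascript
function makeCounter() {
    var count = 0; // lives on in the closure after makeCounter returns

    return function () {
        count = count + 1;
        return count;
    };
}

var counterA = makeCounter();
var counterB = makeCounter();

console.log(counterA()); // 1
console.log(counterA()); // 2
console.log(counterB()); // 1 -- counterB has its own count variable
```

Each call to makeCounter creates a fresh count, so counterA and counterB do not interfere with each other.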

To understand closure better, let us discuss one more example:

function add(num1) {
    function addintern(num2) {
        return num1 + num2;
    }

    return addintern;
}

var sum9 = add(7)(2);
console.log(sum9);  // prints 9

var sum99 = add(77)(22);
console.log(sum99); // prints 99

We have two closures at work here, behind sum9 and sum99. When add(7) is called, the returned addintern remembers that the value of num1 was 7 in its scope; calling it with 2 then produces sum9. Likewise for sum99: when add(77) is called, the returned addintern remembers that num1 was 77. As expected, the output is 9 and 99.

We can think of a closure as an object with private variables and one method. Closures allow us to attach data to the function that works on that data. So, a closure can be defined by the following characteristics:

  • It is an object
  • It contains a function
  • It remembers the data associated with the function, including the variables of the function’s local scope at the time the closure was created
  • To create a closure, a function returns another function reference
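The “object with private variables and one method” view can be sketched like this (createAccount and deposit are hypothetical names chosen for illustration):

```javascript
function createAccount(initialBalance) {
    var balance = initialBalance; // private: only reachable through the closure

    return function deposit(amount) {
        balance = balance + amount;
        return balance;
    };
}

var deposit = createAccount(100);
console.log(deposit(50)); // 150
console.log(deposit(25)); // 175
```

There is no way to read or overwrite balance from outside; the only access path is the returned deposit function, which is exactly the privacy the bullet list above describes.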

Finally, we can define a closure:

“JavaScript Closure is a special kind of object which contains a function and the environment in which function was created. Here, environment stands for the local scope of the function and all its variables at the time of closure creation.”

Conclusion

In this post, we learned about closures in JavaScript. I hope you find it useful. Thanks for reading!

UX Plagiarism | Stealing Someone’s Work


With the upcoming US presidential election, plagiarism has been in the news of late. Yet, when we discuss plagiarism, we typically focus on the individual(s) who committed the act. How horrible they are! How lazy! How unprofessional! All these things are true but I want to focus for a moment on the individual(s) against whom the act was committed.

In the creative field that is User Experience Design, our work defines us. It is the end result of our training, experience, creativity and hard work. We have our processes, true, but what we tend to sell are our results. The finished product. Whether that product is a complex business application, a mobile app, or a corporate website, our deliverables are our “intellectual property” (even if our clients, technically, own them). They remain ours even after we release them into the world. We never truly walk away.

Until now, I’ve never known how it feels to have had my work plagiarized. And I’m not referring to someone using my work as a springboard toward creating something unique. Nope. I’m talking about straight-up stolen.

My consulting organization within Infragistics, where I serve as Director of UX, has had, since 2013, its own website (http://d3.infragistics.com/). We use it to describe our design philosophy, post client testimonials and blogs, recruit new employees…the typical stuff. We put a lot of time and effort into the creation of that site and continue to put in effort to maintain and update it.

And then a company called Apex Infosystems stole it. Google them and see for yourself. If you happen to read this after their site has been taken down, take a look at the screen shots below.

I can tell you that it feels dirty. And yes, the lawyers have been alerted but that doesn’t make me feel any better. There are no excuses for passing someone else’s work off as your own. In the academic and corporate worlds, there are rules and immediate consequences to plagiarism. In this case, I’ll likely need to hope for a little karma (or Newtonian physics, if you prefer). Either way, it makes me angry and a little sad.

 

Infragistics’ Landing Screen

Apex Infosystems’ Landing Screen

Infragistics’ Services Screen

Apex Infosystems’ Services Screen

Infragistics’ Process Screen

Apex Infosystems’ Process Screen

 

---------------------------------------------

Kevin Richardson has been working in the area of user experience for 25 years. With a PhD in Cognitive Psychology, he has experience across business verticals in the fields of research, evaluation, design and management of innovative, user-centered solutions.

On the weekends, you can find Kevin on his motorcycle, racing for Infragistics Racing at a number of different racetracks on the East coast.

How Mobility Can Improve Process Efficiency for Your Company


Imagine you’re a sales rep and you’re out on the road a lot. It’s the Eighties, and the way you do business is a procession: from the office to meeting your clients, back to the office to input the various paperwork and data manually, and then back on the road again. Now imagine having the first Carphone, before anyone else. You can suddenly call ahead to your appointments; you can do your phone deals while you’re driving to meet a different client; you can change an appointment if some better opportunity presents itself; you can get a colleague in the office to input your reports and data as you make your next appointment. In essence, the Carphone has given you the gift of flexibility, and your level of production skyrockets as a result. More sales for you and your company, a better bottom line, and you help your company streak ahead of the competition because you’re the only one with the phone.

Today’s Carphone takes the form of a smartphone or tablet and innovative mobile apps based in the cloud. And if you can make your business more efficient in the way your employees get their work done, that is going to have a major positive outcome on your IT services ROI and, ultimately, your bottom line. One of the best ways to get more from your teams is giving them the flexibility to work when and where they want through mobile working.

Let’s talk shop

The marriage of collaboration, communication and technology has been key to better business practices ever since Alexander Graham Bell invented the telephone, and it’s as true today as it was back then. If there is a common thread to these innovations, it’s the evolution of our ability to communicate: from the telephone, to email, to the cell phone, to IM from your tablet or mobile. With the tech curve as fast as it is, the difference between a business thriving and failing is its ability to adapt to changes in the way it does business.

One of the major benefits of business mobility is being able to remain connected no matter where you are. Whether that means out of the office, on-site, working from home, or from the train or airport gate on the way to a meeting, having access to your documents, seeing your colleagues’ edits and comments and being able to contact them instantly from anywhere means you can effectively keep out in front of your work and increase your productivity.

Simple economics

Gartner – the technology research experts – predicts that by the end of 2017 the demand for enterprise mobile apps will outweigh the available development capacity five to one. That’s quite a stat. How we view this information is important. On the one hand you could, perhaps, say that the advances in technology are slowing down (they’re not); or, as the report goes on to say, that enterprises find it a challenge to rapidly develop, deploy and maintain mobile apps to meet increasing demand, as it is exceedingly difficult and costly to hire developers with good mobile skills.

You could (and should) look at it slightly differently: this demand means that if your business is not already planning a mobile, cloud strategy, you are at risk of being left behind by your competitors, big and small. Also, according to Gartner, employees today across the digital workplace use an average of three different devices in their daily routine which will increase to five or six devices as the Internet of Things makes its way into the mainstream.

What are your priorities?

It’s becoming clearer that mobility in business is an important step for every organization going forward. So, when asked how ‘mobility’ can improve process efficiency for your company, the immediate answer should be: you need to keep up with the pace of change or risk falling irreversibly behind. Sounds kind of scary. But beyond the need to keep up, bringing mobility to your business processes will also:

Enable superior, customized collaboration— your employees and teams can keep in close contact, work together, bring valued opinions and sign off work from anywhere, at any time, in a way that suits them.

Make best use of your data— making use of the information you collect on a daily basis is one of the best ways to make informed business decisions that will have a positive effect on your bottom line. Gain a better understanding of your customers, as well as how your employees work best.

Work securely— set and specify your own policies to your methods of work. Ensure policies are in line with your business practices, philosophy and compliance responsibilities. 

SharePlus has collaboration and productivity at its core

SharePlus is just one of the mobile tools offered by Infragistics to companies that are demanding complete and effective solutions for giving their business processes a mobile dimension and strategy. To try it free for 30 days, sign up for a demo today.

For more information on how SharePlus can help your institution transition to the modern way of working anywhere, anytime, visit the SharePlus site, or contact us today.

Top 10 Industries Benefiting from Big Data and Analytics


There is a lot of noise on the Internet about Big Data: it’s the next big thing, it’s going to change how you work, it’s going to disrupt everything. There’s certainly a lot of truth in the idea that Big Data will have a big impact, but all the hype can feel a little abstract. So, in order to give you a clearer idea of how data insights and predictive analytics are actually changing industries, we’ve drawn up a list of the kinds of businesses that are benefiting from data the most.

Until very recently, most companies simply didn’t have the tools or the know-how to analyze and explore the data they were collecting. Even if they could collect broad sales figures and customer information, drilling down into that data and really getting insights from it was time consuming and required a data scientist with a PhD. However, an explosion in data analytics tools has changed the game, and now anyone with a little training can create powerful visualizations from company data and use these to reroute company strategy.

So, which industries are now benefiting from data insights and most importantly, how?

1. Travel

The travel industry has always depended on statistics to provide the best possible service. Using data to predict when, where and how people will travel means companies can provide the exact service their customers need, at the best time and at the right price.

How is data being used?

Say you run a train company. By using historical data collected on customer journeys you can predict when there will be higher and lower demand for fares. Of course, train companies have always done this to a degree, but predictive analytics can help you dig down into even greater detail and give you the edge over competitors in a tight market.

2. Energy

The energy industry needs to strike a constant balance in supplying the right amount of energy: too much and you lose profit; too little and your customers will find another provider fast.

How is data being used?

Most power plants have a fairly good idea of when demand is higher and lower. This is no secret, but using data insights can help make energy provisions even more efficient and significantly cut costs. Again, by studying historical demand, power plants can predict minute-by-minute, hour-by-hour energy demands depending on anything from the season to time of day, then use this to provide the exact quantity of energy required.

See a sample Building Management energy consumption dashboard as an example.

3. Insurance

The insurance industry has always depended on math to calculate insurance costs. However, this usually depended on the history of the client in particular and other internal data sources.

How is data being used?

Imagine you provide home insurance. Traditionally, insurance would be priced on risk based on crime statistics, credit scores and loss histories. By using more powerful data analytics tools, however, you can incorporate an even wider array of sources to build an even more specific picture of the risk related to one customer in particular.

Check out this Insurance Enrollment Analytics sample dashboard.

4. Finance

Finance has always been about numbers, but complex algorithms that can collect data from an ever wider number of sources help inform and support trading decisions.

How is data being used?

Live and historical data feeds can alert traders to new opportunities faster than any human could read them, helping them discover openings and gain a competitive edge.

You can find some sample Finance dashboards here.

5. Agriculture

Agriculture is defined by fine margins. Being able to predict variables such as crop prices, pesticide quantities and the health of livestock will help farmers develop a much clearer picture of their expected costs and losses year on year.

How is data being used?

To cut waste, farmers can use data and predictive analytics to have a better idea of exactly how much food will be required to ration out feed to livestock. By providing animals with the correct amount of food – neither too little nor too much – farmers will be able to save considerable sums and reduce risk while raising healthy animals.

6. Health

Providing the right healthcare at the right times is essential, so being able to analyze large, up-to-date datasets to discover trends in the population can help provide better support for public safety.

How is data being used?

Data can be used to analyze long term trends – such as aging populations in advanced economies – and help policy makers and practitioners re-orientate their skills and methods to the needs of a different kind of patient.

7. Mining

With the success and failure of mining companies so dependent on the unpredictable value of raw materials, data analytics can be very useful in cutting costs and providing more long term security.

How is data being used?

By using data to better plan their logistics, mining companies can improve their preparation for delivery of their wares from the ground to the buyer.

8. Education

Education is one of the largest markets in the world, yet educators have often failed to see how data can help them provide better and more appropriate services to students. 

How is data being used?

When students move from one classroom to another and meet different teachers throughout the day, it can be hard to keep track of an individual student’s progress. However, numerous apps are using data collected in school to provide teachers with a more unified insight into their students’ academic progress and allow them to spot problems and provide additional support when needed.

9. Telecoms

Telecom companies have access to a huge amount of customer data, and so by using tools to analyze this they can provide even more personalized services that users actually want.

How is data being used?

In the past, providing telecoms was relatively straightforward – you connected a customer to the network and allowed them to contact their friends, relatives and business associates. However, with the emergence of the Internet and ever more devices for communicating, telecom providers need to offer much more diversity in the services they offer. Data analytics can help them with this by segmenting the market ever more accurately and providing the exact deals different customers will want.

10. Retail

No industry embodies the basic elements of supply and demand better than retail. Data has always been used to understand how customers are buying, but data analytics will help this become even more accurate.

How is data being used?

Internet of Things shelf scanners are increasingly able to tell stores how empty or full their stocks are. Retail data analytics will then allow stores to always provide the exact amounts of product needed.

Keen to explore the power of data analytics in your industry? Try ReportPlus today to discover and present new solutions for your business.


How to locate a particular object in a JavaScript Array


Have you ever come across a requirement to find a particular object in a given array of objects? In this post, we will explore various ways to find a particular object in a JavaScript array. Let us assume that we have an array as shown in the listing below and we need to find whether an object with an Id of ‘4’ exists:

var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

To search for a particular object, we will use the Array prototype find method. find returns the first element that meets a given criterion; otherwise it returns undefined. It takes two parameters: a required callback function, and an optional object which will be set as the value of this inside the callback function.

  1. The callback function will be called for each element of the array until it returns true for a particular element.
  2. The object which will be the value of this in the callback function is an optional parameter; if it’s not passed, this will be set to undefined in the callback function.

The callback function parameter of the find method takes three parameters:

  1. element: the current element being processed in the array
  2. index: the index of the current element being processed
  3. array:  the array on which the find method is being called

Let us say we have a callback function as shown in the listing below. It will print the current element, index of the element, and the array:

function CallbackFunctionToFindTaskById(element, index, array) {
    console.log(element); // print the element being processed
    console.log(index);   // print the index of the element being processed
    console.log(array);   // print the array on which find is called
}

How does the find method work?

  • The JavaScript find method executes the callback function once for each element of the array. So if there are 5 elements in the array, the callback function can be executed up to five times.
  • The JavaScript find method stops executing the callback function as soon as it returns true for a particular element.
  • If the given criterion is true for an element, the JavaScript find method will return that particular element, and will not execute the callback function for the remaining elements.
  • If the criterion is not true for any element, the JavaScript find method will return undefined.
  • The JavaScript find method does not execute the callback function for indexes which are either not set or have been deleted.
  • The JavaScript find method always calls the callback function with three arguments: element, index, array.
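To see these rules in action, here is a small, self-contained sketch (the callCount counter is ours, added purely to make the early exit visible; it is not part of the article's listings):

```javascript
var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

var callCount = 0; // counts how many times find invokes the callback

var task = tasks.find(function (element) {
    callCount++;
    return element.Id === '3';
});

console.log(task.Title); // "Push code to GitHub"
console.log(callCount);  // 3 -- the callback never ran for Ids '4' and '5'
```

Because the callback returned true on the third element, find stopped there and never processed the last two tasks.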

 Let us see some examples of using the find method!

 Find an object on a fixed criterion

We have a tasks array as shown in the listing below:

var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

We want to find an object with an Id of ‘4’. We can do that as shown in the listing below:

function CallbackFunctionToFindTaskById(task) {
    return task.Id === '4';
}

var task = tasks.find(CallbackFunctionToFindTaskById);
console.log(JSON.stringify(task));

In the above listing, we are passing the callback function CallbackFunctionToFindTaskById to the find method of the tasks array. The first parameter of a callback function always represents the element; here, task is the element currently being processed.

In the callback function, we are checking the Id of the current task and, if it is equal to '4', returning true so that find returns that task. In this scenario the criterion is fixed inside the callback function.
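If your environment supports ES2015, the same fixed-criterion search can be written more concisely with an arrow function. This is an equivalent alternative we are adding for comparison, not one of the article's original listings:

```javascript
var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

// The arrow function body is the criterion itself.
var task = tasks.find(t => t.Id === '4');

console.log(JSON.stringify(task)); // {"Id":"4","Title":"Go for running","Status":"done"}
```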

 Find an object on criteria passed in the callback function

In the previous example, we had a fixed criterion that returned any object with the Id of ‘4’. However, there could be a requirement in which we may want to pass the criteria while calling the callback function. We can pass an object as the value of this in the callback function. Let us consider the same tasks array again, which is shown in the next listing:

var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

Next let us create a callback function FindTaskById as shown in the listing below:

function FindTaskById(task) {
    console.log(this);
}

As you can notice, we are printing the value of “this” inside the callback function. Next we’ll pass the FindTaskById callback function to the find method of the tasks array as shown in the listing below:

var task = tasks.find(FindTaskById,['4','67']);

In this case, the value of this inside the callback function has been set to an array with two values: '4' and '67'. In the console, you should get the output shown below:

The value of this is printed 5 times because there are 5 elements in the tasks array. To return the object with the Id set to '4', we need to modify the callback function as shown in the listing below:

function FindTaskById(task) {
    if (task.Id === this[0]) {
        return task;
    }
}

var task = tasks.find(FindTaskById, ['4', '67']);
console.log(JSON.stringify(task));

In the find call, we pass the array ['4', '67'] as the value of this, with its first element set to '4'. Hence, checking whether task.Id is equal to this[0] returns the object with Id '4'.
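Note that relying on this can be fragile; arrow functions, for example, have lexical this and ignore the second argument to find. A closure over the criterion is a common alternative. The makeFindById factory below is our own illustration, not part of the article's listings:

```javascript
var tasks = [
    { 'Id': '1', 'Title': 'Go to Market 99', 'Status': 'done' },
    { 'Id': '2', 'Title': 'Email to manager', 'Status': 'pending' },
    { 'Id': '3', 'Title': 'Push code to GitHub', 'Status': 'done' },
    { 'Id': '4', 'Title': 'Go for running', 'Status': 'done' },
    { 'Id': '5', 'Title': 'Go to movie', 'Status': 'pending' }
];

// Returns a callback that closes over the Id we want to find,
// so no thisArg is needed.
function makeFindById(id) {
    return function (task) {
        return task.Id === id;
    };
}

var task = tasks.find(makeFindById('4'));
console.log(JSON.stringify(task)); // the object with Id '4'
```

The criterion travels with the callback itself, which also makes the call site easier to read.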

Conclusion

In this post, we learned about the JavaScript Array find method and various options with the callback function. Having a better understanding of the find method is essential to be a more productive JavaScript developer and I hope you enjoyed reading!


Undercover Testability Killers


If you were to take a poll of software development shops and ask whether or not they unit tested, you’d get varied responses.  Some would heartily say that they are, and some would sheepishly say that they totally mean to get around to that next year and that they’ve totally been looking into it.  In the middle, you’d get a whole lot of responses that amounted to, “it’s complicated.”

In my travels as a consultant, I witness the reason for this firsthand.  The adoption rate of automated testing has increased dramatically in the last decade, and that increased adoption means that a lot of shops are taking the plunge.  And naturally, this means that a lot of shops with a lot of legacy code and awkward constructs in their codebases are taking the plunge, which leads to interesting, complicated results.

“It’s complicated” generally involves variants of “we tried but it wasn’t for us” and “we do it when we can, but the switch hasn’t flipped yet.”  And, at the root of all of these variants lies a truth that’s difficult to own up to when talking about your group – “we’re having trouble getting any good at this.”

If this describes you or folks you know, take heart, though.  The “Intro to TDD” and “NUnit 101” guides make it look really, really easy.  But those sources of learning usually show you how to write unit tests for things like “in-memory calculator,” intending to simplify the domain and code so that you understand the mechanics of a unit test.  In doing this, however, they paint a deceptive picture of how easy covering your code with tests will be.

If you’ve been writing code for years with nary a thought to testing at the unit level, it’s likely that familiar, comfortable coding practices of yours are proving to be false friends.  In other words, your codebase is probably littered with things that are actively making your life extremely difficult as you try to adopt automated testing.  What follows are some of the most common ones that I see.

Busy Constructors

When you’re writing unit tests, there’s a pretty simple and minimalist pattern.  You instantiate an object, arrange it for the conditions you want to test, do the thing you’re testing, and then verify that the result is what you expected.  Busy constructors threaten to trip you up right out of the gate, on step one.

If the constructor is executing many lines of code, that means many lines of code that can fail.  Are you passing the wrong argument to it?  Is something inside of one of the objects that you’re passing to it not set up correctly?  Is the constructor instantiating something that’s blowing up?  Is it expecting a global variable to have a value that it doesn’t?

Any of these problems results in a unit test that blows up and requires debugging.  This is not only frustrating, but wholly confusing when you’re trying to figure out the particulars of testing in the first place.  Your busy constructors are testing headaches.
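The article's examples are .NET-flavored, but the idea translates to any language.  As a rough JavaScript sketch (the class and function names here are hypothetical, invented for illustration), compare a constructor that does real work with one that merely accepts its collaborators:

```javascript
// Hard to test: the constructor reaches out and does real work.
// openDatabaseConnection is a hypothetical global; any failure in it
// breaks every test that merely instantiates this class.
function BusyOrderProcessor() {
    this.connection = openDatabaseConnection();
}

// Easier to test: the constructor only stores what it is given.
function OrderProcessor(connection) {
    this.connection = connection;
}

// A unit test can now hand in a trivial fake:
var fakeConnection = { query: function () { return ['order-1']; } };
var processor = new OrderProcessor(fakeConnection);
console.log(processor.connection.query()); // ['order-1']
```

Nothing in the second constructor can blow up, so the "instantiate" step of a test stays boring, which is exactly what you want.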

Global State

Speaking of testing headaches, global state is a huge source of testing problems.  Global variables (public static variables that can be mutated) are the most overt example of this and one that I mentioned in the last section, but there are other forms as well.  The singleton design pattern and service locator patterns are basically global variable repositories, and static methods that encapsulate state have the same effect as well.

The main problem with global state, from a testability perspective, is that it creates hidden dependencies that will not be obvious to you.  If you’re going to write tests for something called “CustomerOrder” that has a parameter-less constructor, you might want to instantiate it and then assert that it has a single line item after you add a line item to it.

Imagine your surprise if, when you’re instantiating it, you get an exception telling you that you have a bad connection string.  Oh, well, that’s because the order class refers to a database singleton that, as part of its initialization, reaches out and connects to the database using something defined in an app config file and stored in a global variable.  Oops.  Good luck setting all of that up for a unit test.
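A minimal JavaScript sketch of the same trap (Database, HiddenCustomerOrder, and CustomerOrder are our own hypothetical names): the hidden-dependency version reaches for a global, while the testable version asks for what it needs:

```javascript
// A stand-in for a global singleton whose initialization can fail.
var Database = {
    connect: function () { throw new Error('bad connection string'); }
};

// Hidden dependency: nothing in the signature hints at the database.
function HiddenCustomerOrder() {
    Database.connect(); // surprise! instantiating this touches the database
    this.lineItems = [];
}

// Explicit dependency: the test can see, and fake, the collaborator.
function CustomerOrder(database) {
    this.database = database;
    this.lineItems = [];
}

var order = new CustomerOrder({ connect: function () {} });
order.lineItems.push({ sku: 'ABC', quantity: 1 });
console.log(order.lineItems.length); // 1
```

With the dependency in the parameter list, the "bad connection string" surprise simply cannot happen in a unit test.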

Lazy Loading

Another pattern I see that correlates with hard-to-test codebases is an affinity for lazy loading.  I understand the attraction of this pattern, as someone who can appreciate a good abstraction.  You get the best of two worlds: not incurring a performance hit before it's absolutely necessary and not burdening clients of your code with the implementation details.

But on the flip side lies a problem.  Hiding those details from clients means also hiding them from people trying to test the code – people to whom “is this going to take a millisecond or 5 minutes” matters a great deal.  Lazy loading is typically reserved for operations that take a lot of time, and operations that take a lot of time typically do so because they do things like talk to databases, access files, or call out to web services.  People trying to test your code are now faced with the conundrum of “when I run this code from my unit test, it will either behave normally or it will try to talk to a database somewhere, and I’m not really sure which.”

This type of thing fares poorly in unit tests.  So if you’re trying to test code that makes use of lazy-loaded constructs, there’s a pretty good chance you’ll wind up banging your head against your desk.
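One way out, sketched here in JavaScript with hypothetical names (Report, loadDetails), is to keep the lazy behavior but inject the expensive operation, so a test controls what "load" actually means:

```javascript
function Report(loadDetails) {
    this._loadDetails = loadDetails; // injected; a test can pass a stub
    this._details = null;
}

Report.prototype.getDetails = function () {
    if (this._details === null) {
        // The load is still lazy, but no longer a mystery to the test.
        this._details = this._loadDetails();
    }
    return this._details;
};

// In production this function might call a web service; in a test it is instant.
var report = new Report(function () { return ['row-1', 'row-2']; });
console.log(report.getDetails().length); // 2
```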

External Access In-Situ

The last source of difficulty that I’ll mention is what I think of as “in-situ” access to things outside of your application’s space in memory.  This might mean reading from files, talking to a database, getting input from a driver, etc.  In applications that lend themselves well to testability, these types of activities are localized to specific places at the edge of your application to minimize dependence on them.

In hard-to-test codebases, however, they seem to just kind of happen wherever they’re needed.  Need to know what a config setting is in the InvoicePreparer class?  Well, just read it in right there from the config file.

While that may seem innocuous, you’ve murdered the testability of the method in question.  Before you put that in, testing that bit of the logic would be no problem.  But now, your unit test suite (and the one on the server) depends on some file existing in some specific place on the disk in order to have any chance of passing.  Now you’ll wind up with a test that fails all the time or sporadically, and both of those create frustration and lead to deleted unit tests and abandonment of the effort.
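In JavaScript terms (readConfigFile and the invoice functions are hypothetical names of ours), the fix is to read the setting at the edge of the application and pass the value in:

```javascript
// Hard to test: reads the file system wherever it happens to be needed.
// readConfigFile is a hypothetical helper; the test now depends on a
// file existing in a specific place on disk.
function prepareInvoiceFromDisk(order) {
    var taxRate = readConfigFile('settings.cfg').taxRate;
    return order.subtotal * (1 + taxRate);
}

// Easy to test: the setting is just a parameter.
function prepareInvoice(order, taxRate) {
    return order.subtotal * (1 + taxRate);
}

console.log(prepareInvoice({ subtotal: 100 }, 0.25)); // 125
```

The logic being tested is identical; only the file access has been pushed out to the caller.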

Make It Easy on Yourself

Starting to unit test is hard.  It means figuring out a new skill, obviously, but what fewer people realize is that it tends to mean starting to reason differently about your code.  That’s a lot on your plate already, so it’s important to understand when you’re making life hard on yourself.  And, if you’re doing the things that I mentioned, you’re making life hard on yourself.

This doesn’t mean that you have to change all of your practices or go on a massive re-work effort in your codebase.  That’s not reasonable in the face of real delivery pressure.  It just means that you should pick your battles, particularly in the beginning.  Test things that are actually testable, and you’ll save yourself considerable heartburn.

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.

 

Infragistics/Nintex partnership enables efficiency everywhere


Roles today have evolved — there are not as many assembly line workers, but the information worker is here to stay. As businesses race to become digital, team collaboration is more critical than ever before.

Sales and marketing teams, field service representatives, or even traditionally office-bound teams like human resources, finance, and information technology, must be able to keep up with their work no matter where they are. In fact, IDC estimates there will be 105.4 million mobile workers in the US, or nearly three-quarters of the workforce by 2020. The modern workforce wants — and often needs — to be mobile, but employers struggle to enable workers to work where they want, how they want, when they want.

As people make this shift to a digital workforce, companies have developed many tools and processes to get the job done, but many of these rely on traditional networks and connectivity; these desktop-bound tools won’t work when you’re on the go.

Put simply: disconnected apps mean disconnected experiences.

Consider how a typical person might want to work today. She wants to use the best-in-breed for each function, which might mean using Microsoft Office for word processing and spreadsheets, and Nintex for workflows and business process. Documents might be saved on a local network — or in the cloud on SharePoint sites, in OneDrive, Google Drive, and DropBox, and she may need to access and aggregate data from Excel, Salesforce or Microsoft Dynamics CRM and Google Analytics, in order to visualize the metrics that allow her to spot opportunities and areas of concern in real-time.

That’s a lot of data from a lot of sources, with a lot of variety in online / offline accessibility. Which frequently causes…a lot of challenges.

Nintex, the leader in workflow and content automation, and Infragistics, a worldwide leader in providing mobile collaboration solutions, are partnering to solve these challenges. We’re doing this by seamlessly integrating Nintex Workflow and Forms with Infragistics’ SharePlus. Nintex Workflow and Forms automate processes on and between today’s most used enterprise content management systems and collaboration platforms, connecting on-premises, cloud workflows, and mobile users; SharePlus is a mobile collaboration and productivity application for Microsoft SharePoint on-premises and Office 365. Together, these solutions enable teams to use a SharePlus Mobile Workspace to run business processes, analyze data, and work on documents both on-premises, or in the cloud, natively from their iOS devices.

The experience for users — regardless of their location or connectivity — is remarkable. They simply select the process they want to start within the SharePlus Mobile Workspace, then the Nintex Mobile app automatically launches, showing the associated form.

Then, once the form is completed and the process is initiated, the user is automatically returned to SharePlus.

Current users of Nintex web-based forms for Office 365 and SharePoint are also able to render and complete associated tasks within SharePlus.

This means sales reps can carry all of the latest messaging for their products in a tablet (as opposed to needing a large briefcase), and still present the information to their customers offline when they can’t get a signal.

Similarly, a field service worker can see current inventory status on a required part, reserve it, and get the process started to make a follow-up appointment for their customer—without having to call the office.

And a hiring manager can review job requirements and candidates’ resumes while in the air, selecting who to meet, even without using unreliable internet on a plane.

With access to documents, data, and workflows within one secure native mobile app, Infragistics and Nintex help today’s workers become more efficient.

  • Keep Work Flowing: Give your teams a single mobile workspace to collaborate anywhere in the world from their iOS device. Run business processes, read and react to your documents on-premises or in the cloud, and analyze your data to make smarter decisions in real-time. 
  • Customize Your Workspace: Adapt SharePlus with Nintex Workflow to match the way your teams work. Create a branded mobile experience right out of the box. 
  • Enable Enterprise Security: SharePlus is MDM ready, allowing teams to be productive without compromising privacy or security.

“When we began to talk to Nintex about the possibility of this relationship, we quickly realized our shared common vision of helping business teams become more efficient,” said Chris Sullivan, Director, WW Alliances and Channel Sales at Infragistics. “We have a core set of common customers, and are excited to bring the solutions together to meet their needs.”

“Nintex automates business processes to make work easier for all,” said Nintex Technology Partner Evangelist,  Eric Harris. “As a Nintex Certified Technology Partner, the collaboration with Infragistics to enhance the mobile experience and help enterprises realize the immediate benefits of automating workflows within the context of their SharePoint content aligns with our goals for driving value across departments for greater efficiencies and results.”

Both existing Nintex and SharePlus customers can connect with their current solution providers to learn more about the benefits of deploying SharePlus with Nintex Mobile. The joint solution is available immediately.

Check out a free trial today at www.infragistics.com/shareplus/Nintex.

Conduct a Pilot Test First


Before beginning a user research study or usability test, one of the most important things you can do is to run a pilot test. What’s a pilot test? It’s a rehearsal of the study with someone standing in as the participant. You run the session as if it was with a real participant, but you throw away the data. The purpose is to see if there are any problems with your technique and also to get some practice with the procedures involved. Afterwards, you can make changes to your technique and discussion guide.

MobileTabletTesting4

What Problems Do Pilot Tests Find?

Pilot tests help you find problems in your research plan and methods, including:

  • Do the participants understand what you’re asking them to do?
  • Does the phrasing of tasks give away the answer?
  • Are your questions or tasks biased?
  • Do your questions or tasks elicit the kind of information you need?
  • Are there repetitive or unnecessary questions or tasks that can be eliminated?
  • Can the tasks be completed in the prototype, or are certain screens, states, or interactions missing?
  • How long does the session take?

Choosing the Pilot Test Participant

For some studies, you can use just about anyone as a pilot test participant, such as a coworker, friend, or relative. For example, if you’re doing a pilot usability test of a general, consumer-facing website, such as a travel booking site, you could use just about anyone. They don’t have to have any special knowledge or experience.

But for some tests, it’s best to get a pilot test participant who closely fits the profile of the actual participants you’ll be using. For example, if you’re conducting usability testing of software used by tax accountants, it’s ideal to get a pilot test participant with the domain knowledge that’s needed to use that application. Even for something that seems simple, such as a study of people using a car buying website, it’s best to have a pilot test participant that fits the user profile of people who are considering buying a car in the next three months. If your questions have a lot to do with the motivation of car buyers and how they use the site, you’ll get very different answers from someone who doesn’t fit the profile. In these situations, if you use a pilot participant who doesn’t fit the profile, you won’t get a good sense of whether the questions are relevant, produce interesting discussion, or how long the session will take.

Can’t Your First Session Be the Pilot Test?

If you don’t specifically conduct a pilot test, then your first session becomes a pilot test. That’s where you’ll find the problems with your method, the tasks, and the questions. The problem with waiting until your first participant to find the problems is that you won’t have much time to make changes before the next session. You may be able to make minor changes in the brief time between the first and second sessions, but it will be too late to make major changes.

The Ideal Pilot Test

Ideally, conduct the pilot test at least a day or two before the first session. Recruit an extra participant, who matches all the screening criteria, to perform the pilot test. If everything works out well, you can keep that person’s data. If you find problems, you can correct those before the official research sessions begin.

Test New Research Activities

Pilot testing is always useful, but it’s especially important when you’re going to try a new technique. For example, I once conducted a group workshop in which we had the participants divide into small groups to illustrate their work processes on post-it notes. We found that it was too difficult for them to do this together as a group activity. They ran out of time, and it wasn’t the best experience. For subsequent workshops, we had each participant work individually to illustrate their own work process, which was much easier and more effective. In effect, our first workshop was successful as an unintended pilot test. However, it would have been much better if we had planned it as a pilot test with a smaller group of participants.

Why Pilot Tests Are Important

You never quite know how well a research study will go, until the first session. It’s much better to work out the kinks in a pilot test than in the first session. Sure, you can sometimes get away skipping the pilot test, and everything works out fine. But it’s those times when things go wrong, which could have been found earlier, that you’ll wish you had conducted a pilot test first.

 

Image courtesy of K2_UX by Creative Commons license.

ReportPlus Desktop Release Notes – Volume Release 1.0.1275.0


ReportPlus Desktop Version 1.0.1275.0 is here with the following enhancements and fixes:

Enhancements:

  • Sync favorite dashboards across devices
  • Sync recent dashboards across devices
  • Raise data rows limit from 50K to 100K

Fixed Issues:

  • R+ Desktop does not display result in calculated field for dashboard created in iOS
  • The Fraction Digits formatting setting is not applied to Pie, Doughnut and Funnel visualizations' labels
  • Sign Out and General Settings are hidden when the cloud storage is not allowed in the configuration
  • Vertical Bar Charts can't display all data labels correctly when there are negative values
  • Clicking a minimized widget when in edit mode maximizes the widget instead of selecting it
  • An exception is thrown upon trying to drill in a chart with a time aggregation
  • Broken links to documentation topics

Build 1.0.1275.0 - Sep-27-2016

Webinar Recap: Performing CRUD Operations in AngularJS 2.0


Earlier this month we hosted a webinar about “Performing CRUD operations in Angular 2” for the India region, and we’d like to share the presentation and recorded webinar with you now! In the webinar, we covered everything you need to know to perform CRUD operations in Angular 2, including:

  1. Routing
  2. Components
  3. Service
  4. Using http, Rx, Observable
  5. Data binding
  6. Form and Input validation

You can view the recording of the entire presentation here:

[youtube] width="560" height="315" src="http://www.youtube.com/embed/l34z53cxq4w" [/youtube]

You can also view the presentation slides here.

Once again, thank you so much for your interest in our webinars – and we look forward to seeing you at a future webinar!

Infragistics Windows Forms Release Notes – September 2016: 15.2, 16.1 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find these notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.


Windows Forms 2015 Volume 2 Service Release (Build 15.2.20152.2118)

Windows Forms 2016 Volume 1 Service Release (Build 16.1.20161.2088)

 

How to get the latest service release?

Infragistics ASP.NET Release Notes - September 2016: 15.2, 16.1 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last Service Release for ASP.NET 15.2

Download the Release Notes

ASP.NET 2015 Volume 2

ASP.NET 2016 Volume 1


Ignite UI Release Notes - September 2016: 15.2, 16.1 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last Service Release for Ignite UI 15.2

Download the Release Notes

Ignite UI 2015 Volume 2

Ignite UI 2016 Volume 1

What Content and Data Would a Professional Services Specialist Take on the Go


The fundamental goal of those working in professional services is to assist clients by managing and improving areas of their business, usually offering customized, knowledge-based solutions. Successful professional services depend on the expertise of individuals, meaning services cannot be standardized – instead, profitability is created through face-to-face interaction with clients. So, the ability to continue working when moving from location to location can be a great advantage to those working in these organizations.

The right one for the job

Professional services firms are profitable only when their team members are able to work billable hours, and so incoming work is often assigned to whoever is currently available. This method maximizes revenue in the short term, but can often lead to a decline in quality and client service. This is because employees will have a specialist area of work – tax fraud cases, for example – and as such would, understandably, be the first port of call if a new tax fraud case came through. If that worker is already involved in another case, however, it will be assigned to someone else who is available, who may not be as well-equipped to deal with the complexities of tax fraud cases. Keeping everyone productive is paramount, but scheduling the ‘wrong’ staff members can have a negative impact on client satisfaction.

It is situations such as these where mobile working can showcase real benefits, and is why global firms such as Deloitte, PwC, McKinsey and EY have employed their own mobile apps on Android and iOS for employees and clients alike to use.

Mobile matters

Let’s take a law firm, for instance. The day-to-day work of different lawyers will vary according to their areas of expertise, but at the heart of the majority of business transactions are contracts. A standard merger and acquisition (M&A) transaction would generally be broken into four stages, each of which can be improved through mobile technologies:

1.   Internal client discussion

A client will identify a potential acquisition, such as an IT company identifying a potential relationship with an app developer. The client is keen to proceed, but first wants clarification from the management team. What are the inherent risks, assets and liabilities? How does the IT client ensure it can capitalize on the opportunity? This is where external, professional advice is sought.

The power of mobile: Create a project immediately during initial client discussions and avoid losing key pieces of information ‘in translation’. Share project dashboards with clients to establish a method of communication as soon as first contact is made.

2.   Client seeks preliminary advice

The law firm would:

  • Draft confidentiality agreements to ensure the client doesn’t misuse any sensitive information coming from either party.
  • Review constitutional documents, company searches and contracts so the client can know whether consent is required from shareholders.
  • Review financial information and contracts and advise on whether the transaction requires consent from competition authorities.
  • Provide advice on tax risks and structuring.

The power of mobile: Collaborate with colleagues using the power of document management platforms such as Microsoft SharePoint. Create, edit and comment on documents to make the review process as thorough as possible, even when workers are away on different projects. Content can remain secure while mobile with encryption, permission and application level policies.

3.   Decision on whether to proceed

As a result of the law firm’s advice, the client decides to proceed, which spawns in-depth research into any risks and liabilities that will enable the client to know whether they want to commit, what they are prepared to pay and what form the contract will take. As a result, a sale and purchase agreement (SPA) is drafted and intensive negotiations can begin.

The power of mobile: The SPA is a meticulous process, with a lot of time and care required to ensure everything is correct before a deal can be determined. Remote sync capabilities make sure only the most up-to-date documents and files are shared, while still allowing individuals to make changes as they please without interruption. With distance between employees often being a factor, remote sync capabilities can prove invaluable.

4.   Completion

Once checks are all run and given the all clear, documentation is in order and a price is agreed upon, all that’s left is to arrange the share transfer.

Before this can be done, however, a completion meeting – either in-person or through email – must be held. This process may involve coordinating many different parties in different jurisdictions. Documents like board minutes, shareholder resolutions, share transfers and certificates all need to be gathered and checked; any last-minute hitch can sink the entire deal.

Post-completion, the law firm and client have time for relationship-building. After getting to know each other during negotiations, this is the time for securing potential future work.

The power of mobile: Open messaging channels between lawyer and client greatly reduce a lapse in communication at the last hurdle. Maintaining a positive relationship post-completion is also made easier, increasing the likelihood of potential future work.

Take on the power of mobile

SharePlus, from Infragistics, allows users to create a customized mobile workspace for professional services teams. By ensuring team members on client sites or working remotely can easily manage any size project, you can improve the quality of client engagement and have a positive impact on their level of satisfaction.

Through searching, sharing and collaborating on relevant content, as well as offline access and remote synchronization capabilities, projects can be managed more efficiently and with less hassle.

By providing estimation and planning resources, tracking key events and deliverables and planning resource allocation and billable vs. non-billable hours, SharePlus can provide a flexible mobile solution to empower those working in the professional services industry.

For more information on how SharePlus can help you, contact us today or visit the Infragistics site.


UXify Bulgaria 2016


UX Architect Jason Caws, Senior UX Architect Stefan Ivanov, UX Art Director Spasimir Dinev, UX Art Director Andrea Silveira, and VP of Product Management, Community, and Developer Evangelism Jason Beres will be attending the UXify Bulgaria 2016 conference on October 14 and 15 in Sofia. Infragistics’ presentations range from the practical use of interactive prototypes, including workshops on Infragistics’ interactive prototyping tool, Indigo Studio, to building a case for UX, to a free screening of the film “Design Disruptors.”

Building simple multilingual ASP.NET Core website


Introduction

In this tutorial we will create a new multilingual website in ASP.NET Core and publish it to IIS. Version 1.0 of ASP.NET Core was released in June 2016, so it’s quite a new tool. Its main feature is that we can develop and run our apps cross-platform on Windows, Linux and Mac. Today we’re going to concentrate on Windows. ASP.NET Core differs in several ways from ASP.NET MVC 5, so it’s a good idea to start with something simple, and our website, which consists of two webpages, each in three languages, is a good place to start.

Creating .NET Core environment on Windows

To start with ASP.NET Core, we need Visual Studio 2015 with Visual Studio Update 3 installed. Skip this step if you already have both installed. If not, you can get Visual Studio Community for free here: Visual Studio Community 2015 Download, and Visual Studio Update 3 here: Visual Studio Update 3 Download. During the installation of Visual Studio Community 2015, just select the default installation.

Installing .NET Core 1.0 for Visual Studio and .NET Core Windows Server Hosting

Now we need to install .NET Core 1.0 for Visual Studio and .NET Core Windows Server Hosting, so we will be able to build and publish our website. You can get .NET Core 1.0 for Visual Studio here: .NET Core 1.0 for Visual Studio Download and .NET Core Windows Server Hosting here: .NET Core Windows Server Hosting Download.

Creating a website

With all these prerequisites installed, we can proceed to create a new website. To do so, open Visual Studio 2015, go to File/New/Project and choose Visual C#/Web/ASP.NET Core Web Application (.NET Core). Name it NETCoreWebsite (fig. 1).
Figure 1 – creating a template for an ASP.NET Core application

In the next window we need to choose the type of template together with the type of authentication. In our case, it will be Web Application and No Authentication respectively. The Host in the cloud option should be unchecked (fig. 2).


Figure 2 – choosing the right template and type of authentication


A new ASP.NET Core project has just been created. Moreover, we can display it in our web browser. To do so, click the IIS Express button on the navigation bar. After a few seconds the default website should appear in our web browser. We can switch between all the items on the navigation bar.
Figure 3 – the default website created with a new ASP.NET Core project

Adding webpages and static files

Now we move on to creating our own website. All directories and files mentioned in this tutorial are placed in src/NETCoreWebsite in Solution Explorer.
First of all, we should remove unnecessary files. To do so, go to Views/Home and delete all three webpages placed there. After that, go to wwwroot/images and delete all the images that directory contains.
Now it’s time to add our own webpages to the project. Right click on Views/Home and choose Add/New Item (fig. 4).
Figure 4 – adding new webpage to project


In the new window choose .NET Core/ASP.NET/MVC View Page. Name it FirstWebpage.cshtml (fig. 5).
Figure 5 – adding new webpage to project, continuation

Our webpage has just been created. Repeat this step for SecondWebpage.cshtml.
Now we’re going to fill the .cshtml files we created in the last step with HTML code. IMPORTANT: these .cshtml files should contain only the content of the <body> tag, without declarations of shared elements (like the navigation bar or footer), references to CSS files, fonts or <script> tags. The <body> tags themselves shouldn’t be included either.
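As a purely hypothetical illustration of that rule (your actual markup will differ), FirstWebpage.cshtml could be as minimal as:

```cshtml
@* Only body content here – no <html>, <head>, <body>, navigation bar, footer, CSS or script references *@
<h1>First webpage</h1>
<p>Some content for our first page.</p>
<img src="~/images/example.png" alt="Example image" />
```

Everything else (the surrounding document structure) will come from the layout page we edit later.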
It’s time to add static files like images, CSS or JavaScript to our project. A few steps ago we deleted the unnecessary images from wwwroot/images. Now we’re going to add our own images to this directory. Right click on it and choose Open Folder in File Explorer (fig. 6). That will open the images directory in File Explorer, and now all we have to do is simply copy our images there. NOTE: Remember to add “~/” at the beginning of every image path.
Figure 6 – opening directory in File Explorer to easily add new items to it


In a very similar way we can add CSS and JavaScript files. We just have to add them to wwwroot/css or wwwroot/js.
ASP.NET Core uses the MVC pattern, which means that Controllers are responsible for displaying our Views to end users. To display our webpages, we need to edit HomeController.cs, placed in the Controllers directory.
In HomeController.cs delete the About() and Contact() methods. Then copy the Index() method and paste it just below the original Index() method. After that, change “Index” to “FirstWebpage” in the first method and to “SecondWebpage” in the second. These methods simply return the View, which allows our webpages to be displayed in the browser. After completing this step, our HomeController.cs class should look like this:
public class HomeController : Controller
{
    public IActionResult FirstWebpage()
    {
        return View();
    }
    public IActionResult SecondWebpage()
    {
        return View();
    }
    public IActionResult Error()
    {
        return View();
    }
}

Go to the Startup.cs class and find the method called Configure. In the method body we will find code similar to this:
app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
Change {action=Index} to {action=FirstWebpage} so our chosen webpage will be displayed by default.
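For reference, after that change the routing block in Configure should look like this:

```csharp
app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        // FirstWebpage is now the default action instead of Index
        template: "{controller=Home}/{action=FirstWebpage}/{id?}");
});
```

With this in place, browsing to the site root invokes HomeController.FirstWebpage().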
In the next step we will add references to our CSS and JavaScript files and extract shared files.

Extracting shared files

To extract shared files, we have to edit the _Layout.cshtml file. A layout page contains the structure and shared content of the website. When a web page (content page) is linked to a layout page, it is displayed according to the layout page (template).
The layout page is just like a normal web page, except for a call to the @RenderBody() method where the content page will be included.
Open _Layout.cshtml. As we can see, our <head> tag and references to CSS and JavaScript files are defined right here.
Let’s start with CSS. Find the environment called Development. Note that there are two environments with this name, one in <head> and one in <body>. We want to add a reference to a CSS file, so of course we’re going to change the code of the <head> Development environment.
In Development we will find <link rel="stylesheet" href="~/css/site.css" />. We want to add a reference to our own CSS file, so we simply need to change site.css to the name of our CSS file.
We also need to add a reference to the font that will be used on our website. Add <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:400,600" /> just below the CSS file references.
It’s very important to copy the content of the Development environment to the Staging,Production environment (found just below), so that the added references still work after publishing our project.
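Assuming our stylesheet keeps the default name site.css, the <head> of _Layout.cshtml would then contain something like this simplified sketch (the default template usually references CDN-hosted, minified files in Staging,Production; here both environments are kept identical for brevity, as the tutorial suggests):

```cshtml
<environment names="Development">
    <link rel="stylesheet" href="~/css/site.css" />
    <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:400,600" />
</environment>
<environment names="Staging,Production">
    <link rel="stylesheet" href="~/css/site.css" />
    <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:400,600" />
</environment>
```

The environment tag helper renders only the block matching the current hosting environment, which is why both blocks need the references.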
Now we will add references to our JavaScript files.
In the declaration of <body>, find the environment called Development. There we will find a line that looks like this:
<script src="~/js/site.js" asp-append-version="true"></script>
Just like before, all we have to do is change site.js to the name of our JavaScript file. In our case, it would be popups.js.
Again, copy the content of the Development environment to the Staging,Production environment.
The last part of editing _Layout.cshtml is defining the shared elements of our website. In our case, these are the navigation bar and footer, which are the same for both webpages. We just need to replace the default navigation bar code with our own and repeat the same step for the footer.
Once this is done, we can click the IIS Express button and display our website in the browser.

Using resources to localize website

Using resources is a quick and easy way to localize our website. You can read more about it here: Globalization and localization.
Before we add any resources, we need to implement a strategy to select the language for each request. To do so, go to Startup.cs and find the method called ConfigureServices. Replace the method body with code like the below:
      services.AddLocalization(options => options.ResourcesPath = "Resources");
      services.AddMvc()
        .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
        .AddDataAnnotationsLocalization();
We added the localization services to the services container and set the resources path to Resources (we will create that directory in a moment). This will also allow us to base localization on the view file suffix.
We want to localize our website in three languages: English, Polish and German. The default language is English. In Startup.cs, find the method called Configure and add the following code to it:
var supportedCultures = new[]
{
    new CultureInfo("en-US"),
    new CultureInfo("pl-PL"),
    new CultureInfo("de-DE")
};
app.UseRequestLocalization(new RequestLocalizationOptions
{
    DefaultRequestCulture = new RequestCulture("en-US"),
    SupportedCultures = supportedCultures,
    SupportedUICultures = supportedCultures
});

At the very beginning of Startup.cs, add the following using directives:
using System.Globalization;
using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.Localization;
Now we will add the resources used to localize our website. Right click on src/NETCoreWebsite in Solution Explorer and choose Add/New Folder (fig. 7). Name it Resources.

Figure 7 – adding new directory which will contain our resource files


After that, right click on the Resources directory and choose Add/New Item (fig. 8).
In the new window choose .NET Core/Code/Resources File. Name it Views.Home.FirstWebpage.en.resx (fig. 9).
Figure 9 – adding new resources file

The resources file for FirstWebpage.cshtml in English has just been created. Repeat this step for Polish and German (remember to change en to pl and de respectively). After that we should have three resources files in our Resources directory.
Now we need to create resources files for SecondWebpage.cshtml. Repeat the step above three times (once for each language), remembering to change FirstWebpage to SecondWebpage in the name of the resources file and to change the suffixes.
We also need to create resources files for _Layout.cshtml. As you may have noticed, the name of a resources file is the path to the corresponding .cshtml file plus a language suffix. Because _Layout.cshtml is placed in the Shared directory rather than the Home directory, our resources file name for English will be Views.Shared._Layout.en.resx. Repeat this step for Polish and German.
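When all nine files are in place, the Resources directory should contain:

```text
Resources/
  Views.Home.FirstWebpage.en.resx
  Views.Home.FirstWebpage.pl.resx
  Views.Home.FirstWebpage.de.resx
  Views.Home.SecondWebpage.en.resx
  Views.Home.SecondWebpage.pl.resx
  Views.Home.SecondWebpage.de.resx
  Views.Shared._Layout.en.resx
  Views.Shared._Layout.pl.resx
  Views.Shared._Layout.de.resx
```

This naming convention is what lets the framework match each view to its translations by suffix.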
We can now move on to localizing our website. Add the following code at the very beginning of FirstWebpage.cshtml, SecondWebpage.cshtml and _Layout.cshtml:
@using Microsoft.AspNetCore.Mvc.Localization
@inject IViewLocalizer Localizer
To localize any string in our code, we need to replace the chosen string in the .cshtml file with @Localizer["String or its ID"]. It’s good practice to replace short sentences and one-word strings with @Localizer["String"] and long sentences with @Localizer["ID"]. For example, if we want to localize Contact us, we should write @Localizer["Contact us"], but if we want to localize This tutorial will teach you building and publishing your multilanguage website on Windows using ASP .NET Core, it’s better to write @Localizer["About tutorial"].
Let’s assume we used @Localizer["About tutorial"] in our code. To translate it into another language, open the proper resources file, write About tutorial in Key and the translated sentence in Value. That’s all.
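Under the hood, each entry in a .resx file is stored as a <data> element. The Polish entry for the example above might look like this (the value shown is just a placeholder translation):

```xml
<data name="About tutorial" xml:space="preserve">
  <value>Przykładowe przetłumaczone zdanie</value>
</data>
```

The Visual Studio resource editor writes this XML for you; you only ever fill in the Key and Value columns.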
We can choose the proper language from the navigation bar of our website. To make this work, we need to code the buttons like this in _Layout.cshtml:
<li><a href="?culture=pl-PL">PL</a></li>
<li><a href="?culture=en-US">EN</a></li>
<li><a href="?culture=de-DE">DE</a></li>



    Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics JavaScript/HTML5 controls.







Infragistics WPF Release Notes – September 2016: 15.2, 16.1 Service Release


Release notes reflect the state of resolved bugs and new additions from the previous release. You will find these notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in PDF, Excel and Word formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last Service Release for Infragistics WPF 15.2.

In order to download release notes, use the following links:

WPF 2015 Volume 2 Service Release (Build 15.2.20152.2212)

WPF 2016 Volume 1 Service Release (Build 16.1.20161.2134)
