Channel: Infragistics Community

Easily extend your IDE with Extensibility feature for Visual Studio 2015


Visual Studio 2015 is a very advanced IDE with a great number of useful options, but sometimes you may find it doesn’t have every feature you need. Certain operations could be further automated, or you might prefer support for more project types or languages. Sound familiar? If so, there is an easy way to deal with such situations.

A while back, there were Macros and Add-Ins to make our IDE more developer friendly and meet our needs. In Visual Studio 2015, those ways of extending the environment are no longer supported; instead there is Visual Studio Extensibility (VSX or VSIX). Extensibility first shipped with the 2005 version and is now mature enough to give us a great experience building our own plugins.

Thanks to the Extensibility feature we can extend menus and commands, create customized tool windows, editors and project templates, extend user settings and properties in the Property Window, create Visual Studio Isolated Shell apps, and much more. Basically, the only limitations are our needs and imagination. In this article we will give you a brief look at the capabilities of Visual Studio Extensions.

To start with extensions development we need to have Visual Studio 2015 with the Visual Studio SDK. You can get Visual Studio Community for free here: Visual Studio Community 2015 Download. During installation, just select Visual Studio Extensibility Tools and it will be installed together with other parts of the IDE.

If you already have Visual Studio 2015 installed just open it, go to File/New Project and expand Visual C#/Extensibility. Choose Visual Studio Extensibility Tools and follow the instructions.

MSBuild extension (automated custom build)

Today we are going to make a simple but still very useful extension which will let us use a custom *.proj file to build solutions using MSBuild. First let’s create a VSIX Project. To do so, go to File/New/Project, and then choose Extensibility/ VSIX Project and specify the name. In this case it will be CustomBuilder (fig.1).

Figure 1

Visual Studio has generated a Getting Started with Visual Studio Extensions help page. You can delete these files because you don’t need them (unless you’d like to read them). Delete index.html and stylesheet.css, but keep source.extension.vsixmanifest – we’re going to use it later.

Add Package

Now we need to add a package, by adding a new item to our project (fig. 2).

Figure 2

In the Add New Item window, select Visual C# Items/Extensibility/VSPackage to create a new Visual Studio Package. Name it CustomBuilderPackage.cs (fig. 3).

Figure 3

Add Command

We need one more item in the project – a command. To add it, follow the same steps as for the package, but in the Add New Item window choose Custom Command instead of Visual Studio Package. Name it CustomBuildCommand.cs (fig. 4).

Figure 4

Loading package after opening solution

We want our package to be available only while a solution is open. To enforce this, add the ProvideAutoLoad attribute to the CustomBuildPackage class by adding the following code before the class definition (fig. 5).

[ProvideAutoLoad(Microsoft.VisualStudio.Shell.Interop.UIContextGuids.SolutionExists)]
public sealed class CustomBuildPackage : Package
{
    //...
}

Figure 5

 

In CustomBuildPackage.vsct, set the command flag to DefaultDisabled inside the Button tag (fig. 6):

<Buttons>
  <Button guid="guidCustomBuildPackageCmdSet" id="CustomBuildCommandId" priority="0x0100" type="Button">
    <Parent guid="guidCustomBuildPackageCmdSet" id="MyMenuGroup" />
    <Icon guid="guidImages" id="bmpPic1" />
    <CommandFlag>DefaultDisabled</CommandFlag>
    <Strings>
      <ButtonText>Invoke CustomBuildCommand</ButtonText>
    </Strings>
  </Button>
</Buttons>

Figure 6

 

Command button layout and shortcut key

To customize the appearance of our command, you can specify some layout options in the CustomBuildPackage.vsct file. Change the <ButtonText> tag to set the text that will be displayed in the Tools menu.
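For example, the Strings section of the Button element could be changed to a friendlier caption (the caption below is an illustrative choice, not prescribed by the extension):

```xml
<Strings>
  <ButtonText>Custom Build</ButtonText>
</Strings>
```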

Icon

You can also add an icon to distinguish your plugin. First, add a <GuidSymbol> tag to the <Symbols> section:

<GuidSymbol name="cmdIcon" value="{9194BBE0-78F3-45F7-AA25-E4F5FF6D10F9}">
  <IDSymbol name="commandIcon" value="1" />
</GuidSymbol>

Figure 7

 

To generate a GUID for the value attribute, open the Tools menu and select Create GUID (fig. 8). Click the 5th format (Registry Format), copy the generated GUID, exit the tool and paste it into your code (fig. 7).

Figure 8

You also have to specify which picture you want to use. Put a *.png file in the Resources folder under your project directory (the icon should be 16x16 px) and add this code to the <Bitmaps> section.

<Bitmaps>
  <Bitmap guid="cmdIcon" href="Resources\commandIcon.png" usedList="commandIcon" />
</Bitmaps>

 

Now change <Icon> tag to refer to the right file:

<Icon guid="cmdIcon" id="commandIcon" />

 

Shortcut Key

To make the tool more user-friendly, you can add a shortcut key for it. All you need is three lines of code just before the <Symbols> tag:

<KeyBindings>
  <KeyBinding guid="guidCustomBuildPackageCmdSet" id="CustomBuildCommandId" editor="guidVSStd97" key1="1" mod1="CONTROL" />
</KeyBindings>

 

Here key1 specifies an alphanumeric key or one of the VK_ constants, and mod1 is any of CTRL, ALT and SHIFT.
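For instance, to bind Ctrl+Shift+1 instead, the modifiers can be combined in mod1 (a sketch based on the VSCT schema; the guid and id values are the ones defined earlier):

```xml
<KeyBindings>
  <KeyBinding guid="guidCustomBuildPackageCmdSet" id="CustomBuildCommandId"
              editor="guidVSStd97" key1="1" mod1="Control Shift" />
</KeyBindings>
```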

Now our command should look like Fig. 9.

Figure 9

MSBuilder logic

The extension looks like it’s supposed to, but it does nothing yet. To make it work, add the following class:

using System.IO;

namespace CustomBuilder
{
  public class CustomMsBuilder
  {
    private string solutionPath;

    public string msBuildLocation
    {
      get
      {
        var currentRuntimeDirectory = System.Runtime.InteropServices.RuntimeEnvironment.GetRuntimeDirectory();
        return System.IO.Path.Combine(currentRuntimeDirectory, "msbuild.exe");
      }
      private set { }
    }

    public CustomMsBuilder(string solutionPath)
    {
      this.solutionPath = solutionPath;
    }

    public string BuildSolution()
    {
      return FormatOutput(StartMsBuildWithOutputString());
    }

    private string StartMsBuildWithOutputString()
    {
      var outputString = "";
      using (var customBuilder = GetMsBuildProcess())
      {
        var standardOutput = new System.Text.StringBuilder();
        while (!customBuilder.HasExited)
        {
          standardOutput.Append(customBuilder.StandardOutput.ReadToEnd());
        }
        outputString = standardOutput.ToString();
      }
      return outputString;
    }

    private System.Diagnostics.Process GetMsBuildProcess()
    {
      var startInfo = new System.Diagnostics.ProcessStartInfo(msBuildLocation, solutionPath);
      startInfo.RedirectStandardOutput = true;
      startInfo.UseShellExecute = false;
      return System.Diagnostics.Process.Start(startInfo);
    }

    private string FormatOutput(string processedOutput)
    {
      string solutionName = Path.GetFileName(solutionPath);
      var header = "CustomBuilder - Build " + solutionName + "\r\n--\r\n";
      return header + processedOutput;
    }
  }
}
 

This class runs the MSBuild.exe process, builds the opened solution from the custom *.proj file (if the project contains one), and formats and redirects the output of the MSBuild.exe process so our extension can display it to the user.
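A custom *.proj file of the kind the extension builds might look like this minimal MSBuild script (an illustrative sketch; the target name, solution name and properties are assumptions, not taken from the article):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Default target invoked when MSBuild is run against this file -->
  <Target Name="Build">
    <MSBuild Projects="CustomBuilder.sln" Targets="Build" Properties="Configuration=Release" />
  </Target>
</Project>
```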

The constructor of the class accepts a string with solution’s path and stores it in a field, so it can read it later in a suitable method.

The public method BuildSolution() gets the right MSBuild path (using the getter of the msBuildLocation property), starts the msbuild.exe process with the solution path as a parameter, reads the console output (using a StringBuilder), and returns the formatted result, prefixed with the header "CustomBuilder - Build <solution name>".

After the CustomMsBuilder class is finished, it should be called from CustomBuildCommand. In CustomBuildCommand.cs you have to update the callback function as shown below:

 

Add a using:

using EnvDTE80;

 

 

Change the callback name:

private CustomBuildCommand(Package package)
{
    //...
    var menuItem = new MenuCommand(this.CustomBuildCallback, menuCommandID);
    commandService.AddCommand(menuItem);
}

 

Change the callback function and add an additional one:

private void CustomBuildCallback(object sender, EventArgs e)
{
    var cMsBuilder = new CustomMsBuilder(GetSolutionPath());
    var outputMessage = cMsBuilder.BuildSolution();
    WriteToOutputWindow(outputMessage); // displays the output - we'll create this method in the next step
}

public string GetSolutionPath()
{
    DTE2 dte = ServiceProvider.GetService(typeof(SDTE)) as DTE2;
    return dte.Solution.FullName ?? "";
}

 

Output

We can display the result in the Output window (the same way Visual Studio reports whether a solution build succeeded or not). To do that, add the following code to the CustomBuildCommand.cs file (just below the existing methods).

private void WriteToOutputWindow(string message)
{
    var pane = GetOutputPane(PaneGuid, "CustomBuilder Output", true, true, message);
    pane.OutputString(message + "\n--");
    pane.Activate(); // Activates the new pane to show the output we just added.
}

private IVsOutputWindowPane GetOutputPane(Guid paneGuid, string title, bool visible, bool clearWithSolution, string message)
{
    IVsOutputWindow output = (IVsOutputWindow)ServiceProvider.GetService(typeof(SVsOutputWindow));
    IVsOutputWindowPane pane;
    output.CreatePane(ref paneGuid, title, Convert.ToInt32(visible), Convert.ToInt32(clearWithSolution));
    output.GetPane(ref paneGuid, out pane);
    return pane;
}

 

We also need to generate a new GUID and assign it to a variable at the beginning of the file:

namespace CustomBuilder
{
  internal sealed class CustomBuildCommand
  {
    //...
    public static readonly Guid CommandSet = new Guid("84a7d8e5-400d-40d4-8d92-290975ef8117");
    //...
  }
}

Distribution

After you’re done with the development of your extension and you’re sure it works fine (double check!), you can easily share it with other developers. First open source.extension.vsixmanifest and specify some information about your extension. You should fill in all the metadata, the target versions of Visual Studio, dependencies and other relevant information.

Figure 10

There are two supported ways to distribute your extension. First, you can simply share the *.vsix binary file of your Visual Studio Extension – e.g. send it by email, send a link to ftp/cloud, or distribute it however you wish. You can find the file in the bin/Release folder of the solution (if you built the Release configuration of your extension). All the recipients have to do is download the file, close Visual Studio, double-click the file and follow the installation wizard, which is straightforward.

Visual Studio Gallery

If you want to reach a larger number of Visual Studio users, you can also publish the extension on the Visual Studio Gallery. To do so requires a few steps:

Sign in to the Visual Studio Gallery website with your Microsoft account. Choose the “Upload” option on the screen and create an MSDN Profile if you don’t have one. Specify your Display Name, agree to the Terms of Use and click the Continue button.

On the next few screens you have to input some information about your extension, such as the Extension type – whether it is a tool, control, template or storyboard shape (the last applies only to PowerPoint, so not here).

After you have specified the type of your plugin, you can choose where to store the *.vsix file. You can upload it to Microsoft’s servers or share a link to a custom location on the Internet (e.g. if you have your own server).

After you have uploaded (or linked to) the extension, you can add Basic information. Some of this is filled in automatically from the project’s source.extension.vsixmanifest, like the title, version or summary. You can choose the category and add some tags to help users find your extension easily. In the Cost category you can specify whether you want to sell your extension (the Trial or Paid option) or provide it for Free. A really nice feature included here is the option to share a link to your code repository, if you want to let users browse through your code.

You also have to provide more information about your plugin. In the Description field you can use the given templates or create your own document about your extension. Videos and images can be included so you can fully present your plugin.

Then you have to agree to the Contribution Agreement by checking the box and clicking the Create contribution button.

After saving, your extension is not published yet; you can still Edit, Delete, Translate or Publish your project to finally make it live.

Now it’s easy to find your plugin on the Visual Studio Gallery website, or in Extensions and Updates in Visual Studio, by typing the name or the keywords you specified while uploading your project.

Summary

The Extensibility feature in Visual Studio 2015 is very easy and straightforward to use, as you can see. You create a project just as you’d create a simple console application in Visual Studio, and you can share it easily too, extending the functionality of your IDE.

 

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.


Code faster with less bugs


Feeling anxious about your code? Does it seem as if your boss is always waiting around on you, breathing down your neck? Are you worried your colleagues are wondering why you’re taking so long? Well don’t worry, you’re not the only one!

Any developer worth his or her salt will have gone through the exact same thing. Coding is a creative endeavor; you’re not on a production line and your job isn’t about producing the same thing again and again. Just like music, writing or acting, there’s often not a ‘right’ way of getting to a functional end product, although there are a lot of ways of doing it better – and faster.

Of course, if you’re writing quality, well-tested, bug-free, easy-to-maintain code, you’re actually going to save yourself and your colleagues a lot of effort down the line. However, even if you’re doing all the best groundwork, the pressure to do more, faster, is always going to be there.

No one likes to feel as if they’re holding the team back – it can feel embarrassing and frustrating. Fortunately, there are quite a few simple, pragmatic steps you can take to code faster and with less bugs. Reading this post shows you’re taking a positive approach to your work, so cut yourself some slack, you’re on the right path!

We spoke to our highly experienced team of developers here at Infragistics to see what advice they’d give.

1. Learn from more experienced developers

As with many problems in life, you’re probably not the first person to have had this issue. That should feel like a relief: knowing other people have struggled with coding fast and have gotten better shows you can too. Many of our devs told us they’d really benefited from shadowing more experienced programmers. Working on a project with a seasoned pro will help you pick up a lot of tricks of the trade. You can see how they approach a problem, what code they reuse and how they test for bugs.

Taking this same approach online, sites like Stack Overflow are immense resources where intelligently asked questions are met with thoughtful answers.

2. Are you doing unnecessary work?

As we stated, coding is creative, with many ways to solve many problems. But for common tasks and issues, it is often the case that someone has solved the issue before (and potentially more elegantly). Again, the web is the developer’s friend. Sites like C# Design Patterns offer good solutions to common problems. More general patterns are offered by sites like TutsPlus that solve common programming problems.

Building on this theme every developer can code faster with less bugs by using code libraries. These are collections of prebuilt code that perform set functions. Using libraries in your code is even better than using patterns, as the code is right there. JavaScript is a great example of a language that has many excellent libraries that fulfill many useful purposes.

3. Don’t code, plan

That’s right: if you want to code faster with fewer bugs, stop coding. Using libraries and patterns like those described above is one route. Another is to stop coding altogether and plan. You can cut the development time of an app by building with prototyping tools. Indigo Studio, our UX prototyping tool, saves developers a lot of time by helping them build a working prototype of their app without writing a single line of code. You get from idea to finished product in far less time.

4. Don’t replicate your code across platforms

When you’re building an app for multiple operating systems, our developers recommend a platform like Xamarin. Xamarin speeds up your coding time by letting you build your app once in C# and deploy it rapidly across iOS and Android. Compared to writing (and supporting) fresh code for each platform, you save a lot of time and energy.

5. Objectively measure how you spend your time

Our team also recommends measuring your own productivity. This is a little like running your own experiment: spend a week with pen and paper by your desk and simply track how much time you spend on different jobs throughout the day. At the end of the week you’ll have quite a clear diary of how you actually approach the working week – you might be shocked by the amount of time you spend off-task, replying to emails, attending meetings or whatever. You may also realize you’re not actually coding slowly, but are instead spending far too much time on some unnecessary task. You can then identify your weak points and work on reducing time lost in these areas.

If you take some of these steps and implement them in your day to day working practices, you should start to notice gradual improvements. We’re not promising instant miracles, but taking a pragmatic approach will help you to minimize your weaknesses and accentuate your strengths. Good luck!


Bring high volumes of complex information to life with Infragistics WPF powerful data visualization capabilities! Download free trial or contact us today.

 

Building a Real time application with SignalR – Part 2


 

 

This post is a continuation of my previous post, where we discussed the needs, basics, configuration and transport mechanisms of SignalR. In this post we will take things a step further by creating a sample application and analyzing it. Broadly, this post can be divided into two major parts: in the first, we will create a sample; in the second, we will see how SignalR works in various environments and look into the communication between client and server. For a bit of background, check out Part 1 of this series here.

 

Working on the Sample

Server monitoring is an important task that we perform in many scenarios. One simple way to do it is to log in to the server remotely and monitor it there – or you can use a client which connects to the server, pulls the required performance counters and displays that information accordingly.

In this example we are going to create a server monitoring application which shows server resource utilization in real time. Our sample will be a basic one, as our goal is to explain SignalR: we will read some basic counters of the server and display that info on the UI. The UI will be updated every second, implemented via a timer on the server that pushes the data to the clients every second.

We have already seen the required configuration in our previous post, so I’ve created an empty ASP.NET application and completed following steps:

  • Installed the SignalR NuGet package
  • Added a Startup class with a Configuration method
  • Added an HTML file, Index.html, and referenced jQuery, the jQuery SignalR client and the hub proxy

The core part of any SignalR application is the hub. There are two key things we do here: first, reading the server resource counters, and second, pushing this information to the connected clients at a certain interval (in this example, 1 second). So let’s see the hub implementation:

[HubMethodName("sendSeverDetails")]
public void SendSeverDetails()
{
    string processorTime;
    string memoryUsage;
    string diskReadperSec;
    string diskWriteperSec;
    string diskTransferperSec;

    // Getting the server counters
    GetServerUsageDetails(out processorTime, out memoryUsage,
        out diskReadperSec, out diskWriteperSec, out diskTransferperSec);

    // Broadcasting the counters to the connected clients
    Clients.All.SendSeverDetails(processorTime, memoryUsage,
        diskReadperSec, diskWriteperSec, diskTransferperSec, DateTime.Now.ToString());
}

This is the core hub method. Here we get the server’s counters, then call the client callback function, passing all the parameters. In this example we pass a separate parameter for each counter value, but we could also create an instance and pass a JSON string. As we need to keep updating the client with the latest counters, we need to call this method at a certain interval (here, 1 second). This can be achieved with a timer:

static Timer myTimer;

private readonly TimeSpan _updateInterval = TimeSpan.FromMilliseconds(1000);

public ServerDetailsHub()
{
    myTimer = new System.Threading.Timer(UpdateUI, null, _updateInterval, _updateInterval);

    // Rest of the code removed for brevity. Download the solution for complete code
}

// This is called by the timer after a certain interval, which in turn calls the core hub method
private void UpdateUI(object state)
{
    SendSeverDetails();
}
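As an aside, the alternative mentioned above – sending one JSON string instead of six separate parameters – could be handled on the client with a small parsing helper. This is an illustrative sketch, not part of the sample; the property names are assumptions:

```javascript
// Parse the single JSON payload the server would send in that variant.
function parseCounters(json) {
    return JSON.parse(json);
}

// Inside the callback it would be used like this:
// hubProxy.client.sendSeverDetails = function (json) {
//     var c = parseCounters(json);
//     // use c.processorTime, c.memoryUsage, ...
// };
```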

Now let’s look at the client side, where we define the callback method.

$(function () {

    var num = 1;

    // Declare a proxy to reference the hub.
    var hubProxy = $.connection.serverDetailsHub;

    // Create a function that the hub can call to broadcast messages.
    hubProxy.client.sendSeverDetails = function (processorTime, memoryUsage, diskReadperSec, diskWriteperSec, diskTransferperSec, when) {
        $('#tblServerUsage tr').last().after("<tr><td>" + num++ + "</td><td>" + processorTime + "</td><td>" + memoryUsage + "</td><td>"
            + diskReadperSec + "</td><td>" + diskWriteperSec + "</td><td>" + diskTransferperSec + "</td><td>" + when + "</td></tr>");
    };

    // Start the connection.
    $.connection.hub.start();
});

Note – you can download the complete code for this example here .

In the above code, we do three things: first, we create the hub proxy; second, we define the callback method, which is called from the server and takes the same number of parameters; and third – another very important step – we start the hub. These steps negotiate with the server and create a persistent connection. In this demo, I simply add a new row whenever new data is received from the server.
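The row-building logic inside the callback can be factored into a pure function; this is an illustrative refactoring (not part of the downloadable sample) that keeps the callback short and makes the generated markup easy to test:

```javascript
// Build one <tr> from a row number, an array of counter values and a timestamp.
function buildUsageRow(num, counters, when) {
    var cells = [num].concat(counters).concat([when]);
    return "<tr><td>" + cells.join("</td><td>") + "</td></tr>";
}

// Inside the callback it would be used like this:
// $('#tblServerUsage tr').last().after(
//     buildUsageRow(num++, [processorTime, memoryUsage, diskReadperSec,
//                           diskWriteperSec, diskTransferperSec], when));
```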

Analyzing the application

SignalR creates a proxy JavaScript file at run time, which is used to create the proxy instance and establish the connection to the server. You can see it by navigating to the proxy URL (by default, /signalr/hubs):

In my previous post, we discussed that SignalR can use multiple transport mechanisms and, based on the scenario, chooses the best option. So let’s see how the negotiation happens before one is selected.

The above traffic was captured while running the sample in IE 11. After downloading the required scripts, it downloads the hubs file (which is the proxy we discussed above). Then you’ll see the red encircled area, where it sends the negotiation request to the server. Based on that, in the next request the webSockets transport gets selected. Some other data is sent with the request as well, such as the connection token, which is unique per client.

Let’s observe in the same application in a different environment:

These details were captured in Chrome; here we see that apart from the common requests, it sends the negotiate request, chooses serverSentEvents as the transport, and starts the request using the selected transport. Let’s see one more scenario:

In IE9, we got three requests similar to those above, except the selected transport was foreverFrame, which then starts the connection.

We see that based on the negotiation request, SignalR chooses the best option – and except for WebSockets, each transport requires one more request to start the connection.

Limiting the Transport Protocols

SignalR is very flexible and allows many configuration options based on need. We can configure a specific transport, or even provide fallbacks in a specific order. This also helps reduce the initial time needed to start the connection, as SignalR already knows which transport protocol to use. We can specify a transport when starting the hub, because that is the function which decides the protocol selection.

$.connection.hub.start({ transport: 'foreverFrame' });

We can also provide fallback options:

$.connection.hub.start({ transport: ['foreverFrame', 'longPolling'] });

Note – similar to $.connection.hub.start(), the proxy also provides a function to stop the persistent connection, $.connection.hub.stop(); once it is called, we need to start the hub again to continue the communication between client and server.

Conclusion

In this post, we created a server monitoring sample, where the server pushes server usage counter details to all the connected clients at a certain interval. We used a timer which raises an event at a certain interval; this first collects the counters and then broadcasts them to the connected clients.

We also looked into the developer tools to examine the transport protocols used in various scenarios, and saw that the same application uses different protocols based on the negotiation. We also saw how to narrow down the transports, or even specify a single protocol, which reduces the initial overhead.

I hope you enjoyed this post, and thanks for reading!

 

Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics jQuery controls.

From ASP.NET WebForms to modern web technologies


You may have come across articles about the end of support for ASP.NET WebForms and why you should consider using ASP.NET MVC over ASP.NET WebForms. This topic is about components from each of the ASP.NET framework programming models – or, to be more concrete, why you would consider using the Ignite UI based grid widgets over the Aikido based grid controls.

Before we go further, I want to be clear that we’re not comparing the two Microsoft web application framework models. Below you will read about the main differences between them, but keep in mind that either can be “the best choice” for a particular solution, depending on the requirements of the application and the skills of the team members involved. You can build great apps with either, and bad apps with either.

How do Aikido grids work?

Like all WebForms controls, our Aikido grids are server-based controls. All of the core features, like cell/row editing, sorting, filtering, virtual scrolling, paging and active cell changing (activation), require a postback to the server in order to sync state between the client and server and to retrieve (and operate on) the data rendered in the grid. While we have used everything at our disposal to guarantee outstanding performance for our ASP.NET grids, the requirement of constant postbacks and maintaining the entire control’s state between client and server can easily become a bandwidth or performance problem with very complex forms.

The goal of this topic is not to drive you away from the Aikido grids, instead we want to show you another perspective on how to implement, present and manipulate tabular data, with modern web technologies.

Why choose the Ignite UI grids?

With Ignite UI you get a new-generation, modern client framework based on jQuery UI. Its whole lifecycle is on the client side, which makes it independent of the server-side technology – very important when it comes to large and demanding apps. Ignite UI grids are built with performance as a core feature, and they are simple to use and maintain.

Let me highlight some of them:

  • Support binding to various types of data sources including JSON, XML, HTML tables, WebAPI/RESTful services, JSONP, Arrays and OData combined (example)
  • Adds features like local and remote sorting and filtering, codelessly.
  • Column hiding, resizing, summaries, fixing, grouping, templating, multi-column headers, sorting, unbound columns – in short, Column Management Features (example)
  • Easy to use selection features (example)
  • Multiple templating engine integrations (example)
  • Cell Merging (example)
  • Easy export to excel (example)
  • Responsive Web Design mode (RWD) (example)
  • js support (example)
  • Angular JS support (example)
  • Virtualization (fixed and continuous) (example)
  • Append Rows on Demand feature (example)
  • Displays data in a tree-like tabular structure, not only in hierarchical data with multiple levels and layouts (example)

 

While using any of the Ignite UI grids, whether alone in an HTML page or in an MVC project, you will notice full control over the HTML, RESTful services, routing features, an extensible and maintainable project architecture, reduced page size, parallel development support and extensibility.
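The client-side, declarative configuration style these widgets use can be sketched as an options object. This is a hypothetical example (the option names follow Ignite UI’s documented igGrid API, but the column keys, feature list and page size are illustrative, not from the article); in a page you would pass it to $("#grid").igGrid(gridOptions):

```javascript
// Hypothetical igGrid options: explicit columns plus codeless local features.
var gridOptions = {
    autoGenerateColumns: false,
    columns: [
        { key: "Name", headerText: "Product Name", dataType: "string" },
        { key: "Price", headerText: "Price", dataType: "number" }
    ],
    features: [
        { name: "Sorting", type: "local" },
        { name: "Filtering", type: "local" }
    ]
};
```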

You should also keep in mind that ASP.NET Web Forms is not going to be part of ASP.NET 5 (ASP.NET Core 1.0). You will be able to continue building Web Forms apps in VS2015 by targeting the .NET 4.6 framework; however, Web Forms apps cannot take advantage of any of the new features of ASP.NET 5. None of this is certain yet, though.

Currently, one of the certain things is that the jQuery JavaScript library has become one of the most widely-used JavaScript libraries. jQuery’s browser abstraction layer along with the built-in DOM query engine make the library a solid platform for constructing UI components and controls. Built on top of the jQuery core, the jQuery UI library provides widgets and interactions for designing highly interactive and responsive web user interfaces.

References:

MVC vs. WebForms, A Clear Loser Emerging

Using IgniteUI and ASP.NET MVC

How to Keep Your Field and Office Teams Organized and Productive?


For many years, organizations have struggled to manage the challenges associated with geographically dispersed teams. How can your company stay focused and united when teams around the country or even across the world rarely work face to face?

There have been a whole range of solutions and technologies intended to overcome this challenge. In the past, geographically spread organizations depended on the post, fax, newsletters and telephone calls.

Things have gotten a whole lot easier since the widespread adoption of the Internet, however, and this has resulted in a growth of teleworking in the US:

Fig 1. Total U.S. Teleworkers

Source: Global Workplace Analytics 2015

This is good news for teleworkers and your business. Recent research shows teleworkers are not only more productive, but also more likely to work even when they're sick. However, telework usually involves employees working from home. What about field workers?

Field workers are, by definition, away from their desks and typically dependent on an Internet connection. If you've ever worked in a team where field workers and office workers are in different locations, you'll appreciate this can lead to quite a lot of misunderstandings, conflict and even distrust.

Why the head office/field work dynamic is so tricky

There are a number of challenges you need to bear in mind when working in a team with colleagues in a central office managing or working in conjunction with those in the field:

  • Resentment: field workers may feel they are doing ‘real work’, while the ‘paper pushers’ in head office order them around
  • Distrust: the two teams work in very different ways and as a consequence misunderstand how the other works
  • Misunderstandings: field workers and office workers are rarely in the same place at the same time, which means they ‘cross wires’ about what the other team is doing, what is expected of them, and what the vision for a piece of work is
  • Lack of communication: field workers, by definition, often don’t have instant access to email or even telephone connections. As a result, the two teams can struggle to stay up to date

The head office to field work dynamic can be really tough. However, there are a number of things you can do to alleviate these issues and make the relationship flourish.

1. Promote a team identity

Researchers at Stanford have shown how teams who feel that they are working to solve a problem together – rather than simply doing their own separate tasks – are much more motivated. So, how can this be achieved when you’re not in the same room – or even the same country?

Above all, your colleagues need to be kept in touch. When they can’t be in the same physical place as one another, providing access to enterprise social tools like Yammer or Office 365 Groups means individuals feel they’re part of a group working towards a larger goal. Giving field workers access to such software is therefore key.

2. An Intranet

Intranets might not be the sexiest tools in the world, but they’re still amazingly powerful. Providing common access to a SharePoint library for both teams via a mobile phone or tablet app means they can stay up to date with changes to documents and avoid misunderstandings.

3. Conference calls and webcams

With teams working far apart from one another, it can be hard to build a sense of trust and common purpose. While it’s tempting to simply email colleagues in the field, this will only lead to long, frustrating email threads.

If you arm fieldworkers with mobile devices that allow them to carry out video calls with colleagues, this simple face to face contact can create far more trust and facilitate understanding – and achieve far more than a long email chain.

4. A team charter

Management website Mind Tools recommends creating a team charter, so disparate teams have a common purpose. A team charter allows everyone to agree to a set of tasks, roles and responsibilities, and ensures everyone is ‘on the same page’ as to how they should behave and the goals they need to achieve.

5. Facilitate work anytime, anywhere

By its nature, field work does sometimes mean employees are without an Internet connection. However, this should not hinder their ability to be productive. Ensuring they can sync files to their devices and work on them when they need to is crucial. Whether it's blueprints, a sales document or architectural plans, field workers need to be able to edit documents on the go and share them with office-bound colleagues once the Internet connection is restored.

The mobile workforce

With field and office workers often struggling to ‘stay on the same page’, it’s crucial that you facilitate this using best practices and technology.

SharePlus allows field workers to connect to your company’s SharePoint and Office 365 environments, communicate with colleagues and collaborate on documents from multiple file sources on premise or in the Cloud while on the go. Even when there is no Internet connection, field workers can edit documents before syncing their changes to SharePoint later on.

To find out more about how SharePlus could be configured for your field workers and their needs, contact us today – we’d love to hear from you.

Windows File Share (Network Drives) Now Accessible on Mobile with SharePlus


Shared Drives across a network are an essential, secure way of storing and sharing documents; they are the preferred method of collaboration for businesses, colleges and government agencies. Unlike Cloud services, where there is usually one account per individual, there's no limit on the number of Shared Drives you or your colleagues might have.

However, there are some limits. Businesses usually restrict access to Network Drives; in some cases, users are discouraged from storing files outside their internal Drives. This might be convenient security-wise, but it poses a tremendous challenge to both mobility and team collaboration. Online guides with instructions on how to access your Drive from a mobile device never involve fewer than 7 or 8 steps, and none of those steps are easy breezy. Once again, SharePlus simplifies the process for you.

Access your Network Drive in less than three steps!

Accessing your Network Drive is an easy two-step process and doesn’t involve any tweaking. Simply go to the Documents tab in SharePlus, tap “+” and enter your details.

Take advantage of mobilizing your Shared Drives

Avoid having all of your data stuck in one single place with restricted access. Here are a few reasons why you should mobilize your Network Drives with SharePlus:

  • It allows for centralized access to all your storage. Aside from your Shared Drive, you can also access all your other storage services (including Dropbox, Google Drive, OneDrive for Business and, of course, SharePoint).
  • Importing files is easy and quick. Once your Network Drive has been configured, you can import files from your device’s Local Files and Photo Albums. You can also create new files using Audio Recordings and your Camera.

Team collaboration made easy by SharePlus

It has become common practice for businesses to require that their users save their information in an individual Shared Drive. Some of them will have a small C:\ Drive that allows temporary saving but gets wiped daily. With no access to a truly shared service, sharing data between co-workers can become a grueling task. SharePlus not only lets you configure multiple Shared Drives, but it also lets you edit in real time. When using the built-in iOS editors, you will be able to update your Network Drives with the new, edited document.

In iOS, there is also a built-in PDF annotator that helps you include useful notes or symbols for later reference.

Time to take control: one mobile place for all Drives

Delivering a truly mobile experience, SharePlus helps you work the way you want, keeping you productive and in-sync across your mobile devices.

Interested in mobilizing your Network Drives? Get your Free 30-Day SharePlus Enterprise Demo here!

If you have any questions, would like to give feedback, or schedule a demo, please do not hesitate to contact us. We look forward to hearing from you!

SharePlus Meets Dropbox – Seamless and Unified Mobile Access to Your Information


In addition to SharePoint, on-premises or in the cloud, SharePlus now allows you to access additional storage services, like Dropbox, OneDrive for Business, and Google Drive from your mobile device. Plus, you can access Network Drives (Windows File Share) shared by your colleagues.

Dropbox, one of the most popular cloud storage services, is now supported within SharePlus. Why is Dropbox a big deal? There are many reasons, so let’s just mention two: simplicity and a device-agnostic approach. Sounds familiar, right? SharePlus shares the same principles for the sake of a good and consistent user experience: SharePlus iOS users find themselves comfortable and familiar with the Android app and vice versa.

Dropbox Business & SharePlus Enterprise: it’s a match!

With Dropbox Business, Enterprises can take advantage of secure file sharing, administration features, and some collaboration features. Any company taking advantage of Dropbox and using SharePoint will definitely find the integration between SharePlus and Dropbox useful. SharePlus Enterprise provides premium features required by power users, project managers, and IT administrators, successfully meeting the needs of small and large companies alike.

Your Dropbox content now in SharePlus

With SharePlus for iOS’ native previewer, you can take a quick look at any type of file from Dropbox, including Microsoft Office documents, PDF files, and images. When working with PDF files, SharePlus for iOS provides a PDF annotation suite that lets you annotate PDF documents and fill in PDF forms within the app.

When it comes to document, PDF, image or other editing, SharePlus’ smooth integration with third-party apps gives you total flexibility. In the iOS platform, any app that supports incoming and outgoing Open In will do the trick. Just pick the editing app you are most comfortable with or the one allowed by your Enterprise security policies. SharePlus for Enterprise adds many security layers, including app-specific restrictions and feature trimming.

Android users share the same flexibility: once you tap over a file, you can choose to edit it with the help of your preferred editing app on your Android device.

Productivity is not just about doing more

It is about creating more impact with less work. It has always paid to be productive, but nowadays it does more than ever. SharePlus allows you to achieve more with its Dropbox Business integration. You can take advantage of enterprise security and administration features while continuing to work with all your files at your convenience. With SharePlus you can not only mobilize your SharePoint investment, but also enable teams to access content from multiple file sources on premise or in the Cloud.

If you’re interested in learning more about how SharePlus can work with your Dropbox Business account, try out the Free 30-Day SharePlus Enterprise Demo here!

Why Work in Technology? - An Infographic


For several decades now, working "in technology" has been something of a buzzword. It's cool, trendy, and you'll make enough money to last a lifetime, if the rumors are to be believed. But what's true and what isn't? What's the reality of working in technology in 2016? Check out the infographic below to find out!


Share With The Code Below!

<a href="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/5557.Why-Work-In-Technology.jpg"><img src="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/5557.Why-Work-In-Technology.jpg" alt="Why Work in Technology infographic"/></a><br /><br /><br />Why Work in Tech? <a href="http://www.infragistics.com/products/jquery">Infragistics HTML5 Controls</a>

Developer News - What's IN with the Infragistics Community? (3/7-3/20)


This edition of Developer News is chock full of not only things to learn, but WHY you should learn them. If you're considering a new skill but aren't sure what, or why, check out one of these articles, selected by the Infragistics Community!

6. 6 Reasons Web Developers Need to Learn JavaScript ES6 Now (TNW)

5. 9 Skills Every Javascript Developer Should Possess ()

4. Top Programming Languages that will Future Proof Your Portfolio (Information Week)

3. The Six Hottest New Jobs in IT (IT World)

2. These are the 10 Most Demanded Roles for Computer Science Graduates (TechJuice)

1. 24 Best Web Design and Development Tools 2016 (DZine)


SharePlus Now Supports Google Drive


Despite all the hoopla and brouhaha surrounding cloud services, the truth is that storing data in cloud services generally means saving information online using backup services. And while cloud services are very useful, businesses are aware of the perils of having documents strewn across different cloud providers. Every cloud user has experienced at least one fast-paced, panic-driven search for a document through multiple systems. And with the proliferation of cloud storage services and enterprise software, business users need a mobile application with centralized access to all the cloud services they use for work. Once again, SharePlus from Infragistics responds to this need.

Now integrated with Google Drive, the new SharePlus version allows you to access your documents and work without limits from anywhere. All of your work available on your mobile device, ready for you to share and collaborate.

Why you should use Google Drive in SharePlus

SharePlus is a unique platform which lets you mobilize your SharePoint investment and enables teams to access content from multiple file sources, on premise or in the Cloud, on their devices. While it previously offered integration only with Microsoft SharePoint, SharePlus has now raised the stakes by expanding its integrations to include Google Drive, Dropbox, Network Drives and OneDrive for Business.

SharePlus offers a native previewer to display your documents, and also relies on 3rd party applications to edit them. Google’s Docs, Sheets and Slides offer the perfect opportunity for team collaboration using SharePlus’ “Open In” functionality.

Changes to documents will be automatically synced when using Google Apps via SharePlus

You simply need to add Google Drive as a content source inside SharePlus, and you’ll have your SharePoint and Google Drive files in a centralized mobile space! When editing your Google Drive documents inside SharePlus, your experience is enhanced by the auto-saving and auto-syncing capabilities of the Google apps which send your updates directly into Google Drive (including updates to your Starred documents). In case of any connectivity issues, your Docs, Sheets and Slides can be synchronized when you are able to join a working network.

You will need to have the Google apps (Google Docs, Sheets, and Slides) installed on your device to make sure changes are being updated automatically. If you choose to edit your Google Drive files with the help of other apps, like the Microsoft Office suite, your changes will not be reflected in Google Drive or SharePlus.

Google Drive documents can be easily moved to other file locations in SharePlus

No more scattered documents! With the new version of SharePlus, you will be able to move Google Drive files to your SharePoint portals, Dropbox, OneDrive for Business, or any Network Drives shared with you. You choose how to organize your information, quickly and easily.

Interested in trying this out?

Get your Free 30-Day SharePlus Enterprise Demo here!

If you have any questions, would like to give feedback, or schedule a demo, please do not hesitate to contact us. We look forward to hearing from you!


New Solutions to Old JavaScript Problems: 1) Variable Scope


Introduction

I love JavaScript but I'm also well aware that, as a programming language, it's far from perfect. Two excellent books, Douglas Crockford's JavaScript: The Good Parts and David Herman's Effective JavaScript, have helped me a lot with understanding and finding workarounds for some of the weirdest behavior. But Crockford's book is now over seven years old, a very long time in the world of web development. ECMAScript 6 (aka ES6, ECMAScript 2015 and several other things), the latest JavaScript standard, offers some new features that allow for simpler solutions to these old problems. I intend to illustrate a few of these features, examples of the problems they solve, and their limitations in this and subsequent articles.

Almost all the new solutions covered in this series also involve new JavaScript syntax, not just additional methods that can be polyfilled. Because of this, if you want to use them today and have your JavaScript code work across a wide range of browsers, then your only real option is to use a transpiler like Babel or Traceur to convert your ES6 to valid ES5. If you're already using a task runner like Gulp or Grunt this may not be too big a deal, but if you only want to write a short script to perform a simple task it may be easier to use old syntax and solutions. Browsers are evolving fast and you can check out which browsers support which new features here. If you're just interested in playing around with new features and experimenting to see how they work, all the code used in this series will work in Chrome Canary.

In this first article I am going to look at variable scope and the new let and const keywords.

The Problem with var

Probably one of the most common sources of bugs (at least it's one I trip over on regularly) in JavaScript is due to the fact that variables declared using the var keyword have global scope or function scope and not block scope. To illustrate why this may be a problem, let's first look at a very basic C++ program:

#include <string>
#include <iostream>

using std::cout;
using std::endl;
using std::string;

int main(){
   string myVariable = "global";
   cout << "1) myVariable is " << myVariable << endl;
   {
      string myVariable = "local";
      cout << "2) myVariable is " << myVariable << endl;
   }
   cout << "3) myVariable is " << myVariable << endl;

   return 0;
}

Compile that program and execute it and it prints out the following:

1) myVariable is global
2) myVariable is local
3) myVariable is global

If you come to JavaScript from C++ (or a similar language that also has block scoping) then you might reasonably expect the code that follows to print out the same message.

var myVariable = "global";
console.log("1) myVariable is " + myVariable);
{
   var myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

What it actually prints out is:

1) myVariable is global
2) myVariable is local
3) myVariable is local

The problem is that re-declaring myVariable inside the braces does nothing to change the scope, which remains global throughout. And this isn't just a problem related to braces that are unattached to any flow-control keywords. For example, you'll get the same result with the following minor change:

var myVariable = "global";
console.log("1) myVariable is " + myVariable);
if(true){
   var myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

The lack of block scoping can also lead to bugs and confusion when implementing callback functions for events. Because the callback functions are not invoked immediately these bugs can be particularly hard to locate. Suppose you have a set of five buttons inside a form.

<form id="my-form">
   <button type="button">Button 1</button>
   <button type="button">Button 2</button>
   <button type="button">Button 3</button>
   <button type="button">Button 4</button>
   <button type="button">Button 5</button>
</form>

You might think the following code would make it so that clicking any of the buttons would bring up an annoying alert dialog box telling you which button number you pressed:

var buttons = document.querySelectorAll("#my-form button");

for(var i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

In fact, clicking any of the five buttons will bring up an annoying alert dialog box telling you that the button claims to be the mythical button 6.

The issue is that the scope of i is not limited to the for block, so all five callbacks share the same i, which holds the value it had when the loop terminated.

One solution to this problem is to use an immediately invoked function expression (IIFE) to create a closure, in which the current loop index value is stored, for each iteration of the loop:

for(var i=0, n=buttons.length; i<n; i++){
   (function(index){
      buttons[index].addEventListener("click", function(evt){
         alert("Hi! I'm button " + (index+1));	
      }, false);
   })(i);
}

let and const

ES6 offers a much more elegant solution to the for-loop problem above. Simply swap var for the new let keyword.

for(let i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

Variables declared using the let keyword are block-scoped and behave much more like variables in languages like C, C++ and Java. Outside of the for loop i doesn't exist, while inside each iteration of the loop there is a fresh binding: the value of i inside each function instance reflects the value from the iteration of the loop in which it was declared, regardless of when it is actually called.
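The per-iteration binding can be demonstrated without any DOM at all by collecting closures in an array (a minimal sketch, runnable in any ES6-capable environment such as Node):

```javascript
// With var there is a single shared binding: every closure sees the final value.
var varFns = [];
for (var i = 0; i < 3; i++) {
   varFns.push(function () { return i; });
}

// With let each iteration gets a fresh binding: every closure keeps its own value.
let letFns = [];
for (let j = 0; j < 3; j++) {
   letFns.push(function () { return j; });
}

console.log(varFns.map(function (f) { return f(); })); // [3, 3, 3]
console.log(letFns.map(function (f) { return f(); })); // [0, 1, 2]
```

This is exactly the button bug in miniature: the var version is "button 6" all over again, while the let version behaves the way most people expect.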

Using let works with the original problem too. The code

let myVariable = "global";
console.log("1) myVariable is " + myVariable);
{
   let myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

does indeed give the output

1) myVariable is global
2) myVariable is local
3) myVariable is global

Alongside let, ES6 also introduces const. Like let, const has block scope but the declaration leads to the creation of a "read-only reference to a value". You can't change the value from 7 to 8 or from "Hello" to "Goodbye" or from a Boolean to an array. Consequently, the following throws a TypeError:

for(const i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

It's important (and perhaps confusing) to note that declaring an object with the const keyword does not make it immutable. You can still change the data stored in an object or array declared with const, you just can't reassign the identifier to some other entity. If you want an object or array that is immutable you need to use Object.freeze (introduced in ES5).
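To make the distinction concrete, here is a minimal sketch showing that mutation through a const binding succeeds, reassignment throws, and Object.freeze is what actually locks the value down:

```javascript
const config = { retries: 7 };

// Allowed: const protects the binding, not the value it points to.
config.retries = 8;
console.log(config.retries); // 8

// Not allowed: reassigning the identifier throws a TypeError.
try {
  config = { retries: 9 };
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// For real immutability, use ES5's Object.freeze.
const frozen = Object.freeze({ retries: 7 });
try {
  frozen.retries = 8; // throws in strict mode, silently ignored otherwise
} catch (e) {}
console.log(frozen.retries); // 7
```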

Solve Small Problems


It's fun to think of great moments in the history of science, particularly the ones that have a memorable anecdote attached to them.  In the 3rd century BC, a naked Archimedes ran down a city street, screaming Eureka, because he had discovered, in a flash, how to measure the volume of irregular solids.  In the 1600s, a fateful apple bonks Isaac Newton on the head, causing him to spit out the Theory of Gravity.  In the early 1900s, another physicist is sitting around, contemplating the universe, when out pops E=mc^2.

These stories all share two common threads: they're extremely compelling and entirely apocryphal.  As such, they make for great Disney movies, but not such great documentaries.  Point being, we as humans like stories of "eureka moments" and lightning bolt inspiration much better than tales of preparation, steady work, and getting it right on attempt number 2,944, following 2,943 failed attempts.

But it goes beyond just appreciating the former type of story.  We actually manufacture them.  Perhaps the most famous sort of example was Steve Jobs' legendarily coy, "oh yeah, there's one more thing" that preceded the unveiling of some new product or service.  Jobs and Apple were masters of "rabbit from the hat" marketing where they'd reveal some product kept heretofore under wraps as though it were a state secret.  All that is done to create the magic of the grand reveal -- the illusion that a solution to some problem just *poof* appeared out of thin air.

Unrealistic Expectations

With all of this cultural momentum behind the idea, it's easy for us to internalize it.  It's easy for us to look at these folk stories of scientific and product advancement and to assume that not having ideas or pieces of software fall from us, fully formed and intact, constitutes failure.  What's wrong with us?  Why can't we just write that endpoint in one shot, taking into account security, proper API design, backward compatibility, etc?  We're professionals, right?  How can we be failing at this?

You might think that the worst outcome here is the 'failure' and the surrounding feelings of insecurity.  But I would argue that this isn't the case at all.  Just as the popular stories of those historical scientists are not realistic, and just as Apple didn't wave a magic wand and wink an iPod Nano into existence, no programmer thinks everything through, codes it up, and gets it all right and bulletproof from the beginning.  It simply doesn't happen.

Paralysis By Analysis

As such, the worst problem here isn't the 'failure' because there is no failure.  Not really.  The worst problem here is the paralysis by analysis that you tend to face when you're caught in the throes of this mindset.  You're worried that you'll forget something, make a misstep, or head in the wrong direction.  So instead, you sit.  And think.  And go over your options endlessly, not actually doing anything.  That's the problem.

I'm sure there is no shortage of articles that you might find on the internet, suggesting 6 fixes for paralysis by analysis.  I won't treat you to a similar listicle.  Instead, I'll offer one fix.  Solve small problems.

Progress Through Scaling Down

Building a service endpoint with security, scale, backward compatibility, etc, is a bunch of problems, and none of them is especially small.  You know what is a small problem?  (Assuming you're using Visual Studio)  Creating a Visual Studio solution for your project is a small problem.  Adding a Web API project (or whatever) to that solution is a small problem.  Adding a single controller method and routing a GET request to it is a small problem.  Getting into that method in the debugger at runtime is a small problem.  Returning a JSON "Hello World" is a small problem.

If you assembled these into a list, you could imagine a conceptual check mark next to each one.

  • Make solution file.
  • Add project.
  • Add a controller.
  • Hit controller at runtime with GET request.
  • Return JSON from controller.

There's no worrying about aspects, authentication, prior versions, or anything else.  There's only a list of building blocks that are fairly easy to execute, fairly easy to verify, and definitely needed for your project.

The idea here is to get moving -- to build momentum without worry and to move ahead.  There are two key considerations with this approach, and they balance one another.  They're also ranked in order of importance.

  1. Don't let yourself get stuck -- pick a small, needed problem to solve, and solve it.
  2. Do your best to solve your problems in a non-limiting way.

The first is paramount because you're collecting a paycheck to do things, not to do nothing.  Or, to be less flip about it, progress has to come.  The second item is important, but secondary.  Do your best to make progress in ways that won't bite you later.

For an example of the interplay here, consider one of the aspects that I've been mentioning -- say security.  At no point during the series of problems on the current to-do list is security mentioned.  It's not your problem right now -- you'll address it later, when the first step toward security becomes your problem on your list.  But, that doesn't mean that you should do something obtuse in solving your problems or that you should do something that you know will make it harder to implement security later.

For me, various flavors of test-driven development (TDD) are the mechanism by which I accomplish this.  I certainly recommend giving this a shot, but it's not, by any stretch, the only way to do it.  As long as you've always got a way to keep a small achievable task in front of you and to keep from shooting yourself in the foot, your method should work.

The key is to keep moving through your checklist, crossing off items, and earning small wins.  You do this by conceiving of and solving small problems.  If you do this, you may never become Disney Archimedes or Einstein, but you won't have to pull a Steve Jobs magic act during your next performance review to secure a raise against all odds.  You'll already have it in the bag.

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.

Ignite UI Release Notes - March 2016: 15.1, 15.2 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last service release for Ignite UI 2015 Volume 1.

Download the Release Notes

Ignite UI 2015 Volume 1

Ignite UI 2015 Volume 2

Infragistics ASP.NET Release Notes - March 2016: 15.1, 15.2 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last service release for ASP.NET 2015 Volume 1.

Download the Release Notes

ASP.NET 2015 Volume 1

ASP.NET 2015 Volume 2

What I Learned After Using an SSH Honeypot for 7 Days


How Did This Idea Come About?

The idea of using a honeypot to learn about potential attackers came to me while chatting with a friend who had exposed his SSH port so he could log in while away from home. He mentioned that Chinese IPs had been attempting to gain access. These attacks reminded me of when broadband internet was introduced and there were quite a few firewall apps protecting internet users. In these apps, when specific incoming traffic took place, a popup dialog would alert you to a possible attack. Today's internet is a lot more advanced, with several new attack vectors, so running an SSH honeypot seemed a great opportunity to get up to speed on the attacks and attackers affecting the internet.

What Honeypot to Use?

Honeypots are classified into 3 different categories:

  • Low Interaction - simulates services and vulnerabilities for collecting information and malware but doesn't present a usable system for the attacker to interact with
  • Medium Interaction - imitates a production service in a very controlled environment that allows some interaction from an attacker
  • High Interaction - imitates a production service where attackers can have a free for all until the system is restored

When choosing a honeypot, I wanted something that would let me see not only the attacker's IP but also what they do to a system. I also didn't want to expose a full-blown system to the world, where it could be used to hack my internal network or potentially be used for external attacks. With that premise, I chose Kippo. Kippo is the most popular medium-interaction SSH honeypot, designed to log brute-force attacks and, most importantly, the entire shell interaction performed by the attacker.

How Did the Week Progress?

Within the first hour of exposing SSH port 22, I had login attempts taking place from all over the world. As time passed I started thinking about Kippo's popularity: given that it hasn't been updated in a while, it's most likely detectable and lacks simulated features that attackers probe for. A quick web search confirmed these suspicions, so I replaced Kippo with Cowrie. Cowrie is based directly on Kippo, with several important updates that include:

  • SFTP and SCP support for file upload
  • Support for SSH exec commands
  • Logging of direct-tcp connection attempts (SSH proxying)
  • Logging in JSON format for easy processing in log management solutions
  • Many additional commands and, most importantly, a fix for the previous Kippo detection

The switch went smoothly, as Cowrie is essentially a drop-in replacement for Kippo, and scripts like Kippo-Graph work with it. In addition to switching to Cowrie, I updated the included fs.pickle file so the honeypot wouldn't present the out-of-the-box file system you get by default.

As the week progressed the honeypot continued to rack up login attempts; a few were successful, but most were not. Because of this I added a few of the more common username/password combinations to hopefully entice attackers to interact with the honeypot.

What Did the Statistics Look Like?

Over the course of a week there were a total of 1465 login attempts: 1374 failed and 91 successful. These attempts came from 113 unique IP addresses.
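Because Cowrie logs in JSON, tallies like these are easy to reproduce with a short script. Below is a minimal sketch; the `eventid` and `src_ip` field names follow Cowrie's JSON log format, and the sample log lines are invented for illustration:

```python
import json

def summarize(log_lines):
    """Count successful/failed logins and unique source IPs
    from Cowrie JSON log lines."""
    success, failed, ips = 0, 0, set()
    for line in log_lines:
        event = json.loads(line)
        if event.get("eventid") == "cowrie.login.success":
            success += 1
            ips.add(event.get("src_ip"))
        elif event.get("eventid") == "cowrie.login.failed":
            failed += 1
            ips.add(event.get("src_ip"))
    return success, failed, len(ips)

# Invented sample entries shaped like Cowrie's JSON log output
sample = [
    '{"eventid": "cowrie.login.failed", "src_ip": "203.0.113.5", "username": "root", "password": "admin"}',
    '{"eventid": "cowrie.login.failed", "src_ip": "203.0.113.5", "username": "root", "password": "123456"}',
    '{"eventid": "cowrie.login.success", "src_ip": "198.51.100.7", "username": "pi", "password": "raspberry"}',
]
print(summarize(sample))  # (1, 2, 2)
```

In practice you would feed this the lines of Cowrie's cowrie.json log file rather than a hard-coded list.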

Top 10 Usernames

Top 10 Passwords

Top 10 Combinations

Top Connections Per Country

(KR = South Korea, US = United States, RU = Russia, TN = Tunisia, VN = Vietnam, UA = Ukraine, CN = China, FR = France)

What Information Have I Learned?

The major takeaway from this experience is that most of the login attempts appear to be automated, either through some tool or by way of a botnet. Evidence of this comes from the fact that login sessions use origin-identifying passwords and repeat the same passwords day after day. Additionally, a CBL lookup of, for example, the top connection in my honeypot returns the following text.

CBL Lookup Text

Attackers are also targeting exposed Raspberry Pis and Ubiquiti routers, as shown in the statistics. Both of these devices ship with factory default logins that are easily taken advantage of when left unchanged.

Unfortunately, in the 7 days this honeypot ran there were no notable interactions. The most common attacks once an attacker gets inside SSH are botnet connections, IRC bouncers, or anything else that gives the attacker remote control.

Reference Links

The reference links below are related to this blog post. If you're interested in more information about other honeypots available and setting one up, learn more at this link.

SSH Honeypots: Kippo, Cowrie

Scripts: Kippo-Graph, Kippo-Scripts

By Torrey Betts


Using BeautifulSoup to Scrape Websites


Introduction

Beautiful Soup is a powerful Python library for extracting data from XML and HTML files. It helps format and organize the confusing XML/HTML structure, presenting it as an easily traversed Python object. With only a few lines of code you can extract information from most websites or files. This blog post barely scratches the surface of what's possible with BeautifulSoup; be sure to visit the reference links at the bottom of this post to learn more.

Installing BeautifulSoup

If you're using a Debian based distribution of Linux, BeautifulSoup can be installed by executing the following command.

$ apt-get install python-bs4

If you're unable to use the Debian system package manager, you can install BeautifulSoup using easy_install or pip.

$ easy_install beautifulsoup4

$ pip install beautifulsoup4

If you can't install using any of the above methods, you can download the source tarball and install with setup.py.

$ python setup.py install

To learn more about installing or any possible errors that could occur, visit the BeautifulSoup site.

Your First Soup Object

The soup object is the most used object in the BeautifulSoup library, as it houses the entire HTML/XML structure you'll query information from. Creating this object requires 2 lines of code.

html = urlopen("http://www.infragistics.com")
soup = BeautifulSoup(html.read(), 'html.parser')

Taking this one step further, we'll use the soup object to print out the page's H1 tag.

from urllib import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.infragistics.com")
soup = BeautifulSoup(html.read(), 'html.parser')
print soup.h1.get_text()

Outputs:

Experience Matters

Querying the Soup Object

BeautifulSoup has multiple ways to navigate or query the document structure.

  • find(tag, attributes, recursive, text, keywords)
  • findAll(tag, attributes, recursive, text, limit, keywords)
  • navigation using tags

find Method

This method looks through the document and retrieves the first single item that matches the provided filters. If the method can't find what you've searched for, None is returned. One example would be searching for the title of the page.

page_title = soup.find("title")

The page_title variable now contains the page title wrapped in its title tag. Another example would be searching the page for a specific tag id.

element_result = soup.find(id="theid")

The element_result variable now contains the HTML element that matched the query for id, "theid".

findAll Method

This method looks through the tag's descendants and retrieves all descendants that match the provided filters. If the method can't find what you've searched for, an empty list is returned. The simplest example would be searching for all hyperlinks on a page.

results = soup.findAll("a")

The variable results now contains a list of all hyperlinks found on the page. Another example might be you want to find all hyperlinks on a page, but they are using a specific class name.

results = soup.findAll("a", "highlighted")

The variable results now contains a list of all hyperlinks found on the page that reference the class name "highlighted". Searching for tags along with their id is very similar and can be done in multiple ways; below I'll demonstrate 2 of them.

results = soup.findAll("a", id="theid")
results = soup.findAll(id="theid")
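To see find and findAll side by side without fetching a live page, you can parse a small inline document instead (the markup below is invented for the example):

```python
from bs4 import BeautifulSoup

html = """
<html><head><title>Demo</title></head>
<body>
  <a href="/one" class="highlighted" id="theid">One</a>
  <a href="/two">Two</a>
</body></html>
"""
soup = BeautifulSoup(html, 'html.parser')

print(soup.find("title").get_text())                   # Demo
print(len(soup.findAll("a")))                          # 2
print(soup.findAll("a", "highlighted")[0].get_text())  # One
print(soup.find(id="theid")["href"])                   # /one
```

Passing a plain string as the second argument to findAll filters by class name, which is why "highlighted" matches only the first link.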

Navigation using Tags

To understand how navigation using tags would work, imagine that the HTML structure is mapped like a tree.

  • html
  • -> head
  • -> title
  • -> meta
  • -> link
  • -> script
  • body 
  • -> h1
  • -> div.content
  • and so on...

Using this reference along with a page's source if we wanted to print the page title, the code would look like this.

print soup.head.title

Outputs:

<title>Developer Controls and Design Tools - .Net Components & Controls</title> 
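The same dot navigation works on any parsed document. Here is a quick self-contained sketch, with markup invented to mirror the tree above:

```python
from bs4 import BeautifulSoup

# A tiny document shaped like the html -> head/body tree above
html = "<html><head><title>Tree Demo</title></head><body><h1>Heading</h1></body></html>"
soup = BeautifulSoup(html, 'html.parser')

print(soup.head.title)          # <title>Tree Demo</title>
print(soup.body.h1.get_text())  # Heading
```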

Scraping a Website

Using what was learned in the previous section, we're now going to apply that knowledge to scraping definitions from Urban Dictionary pages. The Python script looks for comma-separated command line arguments naming the words to define. When scraping the definition from a page, we use BeautifulSoup to search it for a div tag with the class name "meaning".

import sys, getopt
from urllib import urlopen
from bs4 import BeautifulSoup

def main(argv):
   words = []
   rootUrl = 'http://www.urbandictionary.com/define.php?term='
   usageText = sys.argv[0] + ' -w <word1>,<word2>,<word3>.....'

   try:
      if (len(argv) == 0):
         print usageText
         sys.exit(2)
      opts, args = getopt.getopt(argv, "w:v")
   except getopt.GetoptError:
      print usageText
      sys.exit(2)

   for opt, arg in opts:
      if opt == "-w":
         words = set(arg.split(","))

   for word in words:
      wordUrl = rootUrl + word
      html = urlopen(wordUrl)
      soup = BeautifulSoup(html.read(), 'html.parser')
      meaning = soup.findAll("div", "meaning")
      print word + " -- " + meaning[0].get_text().replace("\n", "")

if __name__ == "__main__":
   main(sys.argv[1:])

Outputs:

python urbandict.py -w programming
programming -- The art of turning caffeine into Error Messages.

References

The reference links below are related to this blog post. If you're interested in more information about using BeautifulSoup a great resource is the Web Scraping with Python book.

BeautifulSoup: Installing BeautifulSoupKinds of ObjectsfindfindAll

easy_install: Installing easy_install

pip: Installing pip

By Torrey Betts

Four considerations when building and deploying a responsive website


 

“90% of people move between devices to accomplish a goal, whether that’s on smartphones, PCs, tablets or TV”

That is according to the latest research from Google on the topic. It strongly suggests that most web users now consume digital media across several devices, which creates a massive problem for developers given the vast number of devices with varying screen resolutions. iPad, Kindle, iPhone and BlackBerry… all of these devices need different approaches when crafting sites, and those are only the market leaders.

The three most common ways that mobile users move between devices are:

  1. Repeating a search on another device
  2. Navigating directly to the destination site on the second device
  3. Sending themselves a link (email or Slack is great for this) to revisit at their convenience later

Imagine how hard it would be to create a version of your website for each of these devices, and that's without even considering the likelihood of screen sizes changing in the future. We need an approach which allows us to design for multiple devices, even if those devices haven’t been created yet.

We need responsive design!

Responsive design is an approach which helps us provide users with the optimum viewing experience regardless of the device they're using. It lets the individual elements of a website - such as its text or images - adjust in both size and layout in response to the behavior of the user and of the device being used.

The other option is to build a dedicated mobile or tablet app. We will look at that approach in another post.

In essence, responsive design is not a single technology, but a set of practices which make your website more efficient for ease of use and maximum conversion. Using various techniques - such as fluid grids and CSS media queries - a website can be made to respond automatically based on the visual preferences set by the user's device.

In this post we will look at a number of considerations to be made when building and deploying a responsive website.

Understanding media queries

A media query can be used to allow you to target a specific set of CSS rules in your stylesheet for implementation on a particular device. This means that a media query can tell the elements on your site how to behave when they come across specific devices. Using these rules we can crop an image for display on a mobile device and have it oriented in landscape on a desktop without changing the HTML for the site. A good repository to see an all-inclusive list of media queries can be found here.

Fluid grids

Fluid grids help us to keep our web content responsive when we are using devices with dynamic screen sizes. These work by defining the maximum layout size for a design and then dividing the grid into a number of columns. The elements are then designed with proportional dimensions as opposed to pixel based ones, so that whenever the screen size is changed, the elements can adjust their dimensions in their parent container by the specific proportions of the device. Creating a fluid grid from scratch can be a time consuming task, but can be made simpler by using free CSS grid systems and generators.

Dealing with Images

Bitmap images (as opposed to vectors like the SVG format) are not fluid by default, and as such tend to have issues when scaled. This can cause an image to lose its visual appeal or even worse, its clarity and context. One way to tackle this is to size your images in relative units instead of pixel dimensions.

Relative units use percentages to help the image react to each device’s specifications. The most common way to use a relative unit is to set the ‘max-width’ property of the image to 100%. This simple CSS instruction will allow an image to display its own natural dimensions as long as there's enough real estate on the device’s screen. It will also cause an image to shrink in scale as its window narrows.

Touchscreen Considerations

It is important to consider that your site may be used on devices with a variety of inputs. Gone are the days when a mouse was the only viable input format for a website and with the influx of mobile devices, touchscreen inputs have become much more popular. The user experience can be improved on these devices by implementing small features such as performing “swiping” gestures to advance through a carousel of images or “pinching” to zoom into an infographic.

Act while you can!

Responsive design is all about the creation of websites that dynamically tweak themselves to ensure your users will receive the optimum User Experience (UX). Responsive websites eliminate the need to develop any other type of mobile website, thus making them a cost effective way to boost your website’s reach. Using fluid grids, media queries and flexibility, we can start to leverage ourselves to take advantage of the influx of mobile device users and potentially even generate higher revenues.

As the latest mobile technologies are introduced, the biggest businesses in the technology world are following suit. Even Google has recently recommended responsive web design and has announced it is checking whether websites are mobile-friendly as part of its search results. This could negatively affect your website's ranking if not implemented correctly, so why not make sure your website is mobile friendly today by using the helpful Infragistics Layout Manager tool!

Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics jQuery controls.

Top Resources for Build 2016


Weren’t able to make it to San Francisco for Microsoft’s Build 2016 conference? Don’t worry—Microsoft has made this year’s conference accessible to developers around the world!

First and foremost, it’s not too late to register for Microsoft’s Build mailing list. Once on the list, Microsoft will send you live updates and on-demand content directly.

Want to be able to watch the keynote speakers live? Microsoft even has you covered there. They will be live streaming keynote speakers starting at 8:30am PT. If you don’t have time to stream all speakers, check out Build’s full schedule to make sure you don’t miss out on your favorites.

If you’re unable to keep up via video, Reddit and Twitter are great resources to check in on the action. As usual, Reddit will host a live megathread throughout the event. In addition, plenty of Twitter accounts will be giving a live play-by-play of this year’s conference:

You can also keep up with the below hashtags:

On top of the standard social media outlets, plenty of blogs will be following the action. Ars Technica, Softpedia, The Verge, PC Advisor, and many others will be live blogging over the next few days.

Want to keep up with Infragistics at Build 2016? Follow Brian Lagunas (@brianlagunas), Ken Azuma (@kenazuma), Ambrose Little (@ambroselittle), and Jason Beres (@jasonberes) to see what we're up to!

If you were lucky enough to make it out to Build this year, stop by booth 422 to say hi to Infragistics and check out the Ultimate experience! 

Infragistics Build 2016

Set-up Remote Unmoderated Usability Studies


When Indigo Studio's product vision was first drafted, it was always intended to be a prototyping solution, not just a prototyping tool. Our ultimate goal is to support the prototyping process for software based applications.

Rapid Prototyping Process = Design + Evaluation

We made Indigo Studio to rapidly design UI prototypes. Equally important to us is the ability to conduct usability studies with target users. It has been said that theory without practice is empty, and practice without theory is blind. This is equally true for design (theory) and evaluation (practice).

While design reviews are one way to collect UX feedback, it's not a substitute for empirical feedback or findings based on usage. After all, we are expecting users to use the prototype and not just stare at it.

Today we are announcing the ability to set up remote and unmoderated usability studies for your Indigo Studio prototypes. Remote because your users can participate in the usability study from anywhere in the world, and unmoderated because users can participate whenever they want to without requiring a moderator to be present.

If you are new to usability studies, you should realize that a remote unmoderated study is no silver bullet. It’s simply one of many ways to evaluate designs. There are pros and cons, some of which are discussed in this article.

TL; DR

Reading not your thing? No problem. We recorded a quick video just for you.

[youtube] width="650" height="488" src="http://www.youtube.com/embed/athAhUx5Xyc" [/youtube]

Creating a New Usability Study

For creating a usability study for a prototype, you need to first share it on indigodesigned.com. I’m going to use a prototype that I have already shared, called Money app. Money app is a prototype that simulates a set of user stories related to managing daily expenses.

Select the create a new usability study option from the prototype details view to kick-off a brand new study.

Create new usability study

When you create a new study, it is automatically added to the “Usability Studies” area of indigodesigned.com. This way you can go there directly to review results for any studies in progress or to review older studies.

Defining Usability Tasks

A study has three main sections. The first section allows you to add some welcome page content, which is what your participants will see before starting the study. You can add any text material for your participants to read. The second and most important part is the section to define your usability study tasks. And finally, you can add some thank you page content that your participants will see when the study ends.

Adding Tasks

Adding content for the welcome page and thank you page is optional. However, you need at least one task defined to start the study.

Recording a Task Flow

The usability study can be conducted remotely and unmoderated. That means, any number of participants can take the study at the same time. So how you phrase your tasks, and the detail present in the prototype, may influence the outcome of the study.

To assist with this, as part of setting up a new task, we let you record the task steps/flow for completing that task. This way, you as a study moderator will know up front whether the prototype will indeed support the task and will not lead participants to a dead end.

For my first task, I'm going to add the following:

  • Assume that you spent some money on a cup of coffee. Record this expense using the app.

Clicking on record a task flow launches the prototype and the recording interface. I will mark the UI state I am currently viewing as the start point. Of course, depending on the task, you can interact with the prototype and get the UI state you need before marking it as the start point.

Once you mark the start point, the service automatically records the steps leading to the finish point as you interact with the prototype. If you make a mistake, you can always undo the last step. Once you are done, save the task flow. And that’s it.

recording task flow

Using the same approach, I can define additional tasks for my study:

  • Change the budget you have defined for coffee from $45 to $65
  • Where will you look to find out how much you spent in the month of June this year?

Inviting Participants

To let participants take the study, officially start it, then use the invite option to get a direct link for participants. You can share this link over email or IM, or post it on a forum.

Invite Users

You can make changes to the study as long as you have no participants. Once participation begins, you cannot make changes. And when you have enough participants, you can close the study to prevent additional sessions.

How to Get Started

We have more articles planned about this awesome new capability. For instance, what the participants can expect to see when starting a study, and how to interpret the study results.

In the meantime, we have set up a top-secret site for you early birds called http://next.indigodesigned.com. It’s identical to the current site except this one has usability studies available today.

The idea is that you can conduct usability studies with your existing shared prototypes. We look forward to any feedback about what's working and what's not. When we publicly release this capability sometime in April, you will no longer need to be on next.indigodesigned.com to create studies; it will be merged into the main site.

All this is just the start of something big: big for your design practice, and big for the prototyping process.

Unlimited Studies, Unlimited Users. No Charge.

As long as you have an active subscription for Indigo Studio, remote usability studies and all other features of indigodesigned.com are free!

Infragistics Windows Forms Release Notes – March 2016: 15.1, 15.2 Service Release


With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find these notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Windows Forms 2015 Volume 1 Service Release (Build 15.1.20151.2230)

Windows Forms 2015 Volume 2 Service Release (Build 15.2.20152.2052)

How to get the latest service release?

Viewing all 2398 articles
Browse latest View live