jQuerify your ASP.NET apps - Microsoft Community Tech Days 2010

Happy to say that I presented at Microsoft Community Tech Days 2010 on "jQuerify your ASP.NET apps" at Microsoft Hyderabad. The event, organized by Microsoft User Group Hyderabad (MUGH), had an audience of 400+ across the Developer and IT Pro tracks!

jQuery based PPT

Presentation: Click here

Demos:  Click here

I chose to prepare my presentation using jQuery itself instead of PowerPoint, so that the presentation would itself be a demo. The response from the audience was really encouraging :)

Note: I used a lot of plugins in my demos but had to tweak them to meet my needs. So please don't use the versions from my demos as-is. I suggest you download the original versions from the URLs mentioned in the plugin files.

It was a great experience interacting with several enthusiastic developers and geeks. I've personally learnt a lot! Hope to meet you all again. Please write to me in case of any doubts about using jQuery with ASP.NET.

Happy coding :)

Enhancing scalability & maintainability of your JavaScript/jQuery code – JS Design patterns

Are you building a huge jQuery-ASP.NET web application whose client script is subjected to a lot of global code changes? If yes, you should probably consider the design pattern below, which helps you gain better control over code execution.

Maintainability problem with large chunks of JavaScript code:

Imagine a web application having ~400 .js files. Each .js file has code for the AJAX calls/DOM manipulations of its respective .aspx page. But some code is redundant and can be moved to a common global function, which can be accessed by all .js files. In the ASP.NET paradigm, this can be compared to your code-behind partial class inheriting a base class: whenever you have a global change, you just change the base class and all .aspx pages pick up the change.

However, there is a huge difference in the above comparison. ASP.NET has a powerful page life cycle which strictly defines which piece of code should be executed at which event: Pre Init, Init, Page Load, Pre Render etc. In the case of JavaScript, it depends purely on where you place your code; even if you call a global function on line 1, developers can always add lines above and below it, which messes up your code execution sequence. This pattern is an attempt to gain control over JS code execution by introducing Pre and Post events.

Solution: Using a wrapper which provides pre/post events:

Let’s say that, for every page, we need to prevent post back on the click of any button, i.e., write “e.preventDefault()” in button click handlers. I call this a “Pre Event”, since it must be executed before any page-specific click handlers.
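
As a concrete illustration, the global “Pre Event” could eventually do something like the following (a minimal sketch; the selector is only an example and assumes the buttons are rendered as submit inputs):

$('input[type=submit]').click(function(e){
    e.preventDefault();   //stop the default post back for every button on the page
});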

Similarly, let’s say that, for every page, we need to disable/show/hide certain controls based on a few conditions. I call this a “Post Event”, since this code has to be executed at the end (similar to the Pre Render event of the ASP.NET model). So here is how we can try to enforce this.

Enhancing our chainable JavaScript library:

The following description is based on my previous article, building a chainable JavaScript library. For the global “Pre” and “Post” events, I’m going to extend the chainable JavaScript library with the “execute” method, like this:

(function(){
    var value='Hello world';
    var preInit=function(){
        console.log('pre Init');
    }   
    var preRender=function(){
        console.log('pre Render');
    }
    var mySpace=function(){
        return new PrivateSpace();
    }
    var PrivateSpace=function(){
 
    };    
    PrivateSpace.prototype={
        init:function(){
            console.log('init this:', this);
            return this;
        },
        ajax:function(){
            console.log('make ajax calls here');
            return this;
        },
        cache:function(){
            console.log('cache selectors here');
            return this;
        },
        setValue: function(newValue){
            value=newValue;
            return this;    
        },
        getValue: function(callbackFunc){
            callbackFunc.call(this,value);
            return this;    
        },
        execute:function(oldClass){
            preInit();                //global "Pre Event" runs first
            var obj=new oldClass();   //instantiate the page-specific module
            obj.init.call(this);      //run the page's init (its page load logic)
            preRender();              //global "Post Event" runs last
        }
    }
    window.my$=mySpace();
})(); 

To explain the execute method, let’s consider the below code:

var CommonSearch=function(){
    var pageLoad=function(){
        console.log("page load operations here...");
    }
    return{
        init: pageLoad
    }
}

The above code is what resides in your ‘.js’ file. It is written using the revealing module pattern. Here, CommonSearch is a constructor function, and to execute it we would normally say:

$(document).ready(function(){
    var objCommonSearch=new CommonSearch();
    objCommonSearch.init();
});

This does not have “Pre” and “Post” events. So if we need some global changes, we are lost: we have to manually change all 400 “.js” files. To solve this problem, we can use our my$.execute method instead:

$(document).ready(function(){
    my$.execute(CommonSearch);
});

Note that in the above snippet, we are not creating an instance of the CommonSearch constructor function ourselves. Instead, we pass it to my$.execute, which internally calls the private “preInit()” method first, then executes the “init()” of CommonSearch, and finally calls the private “preRender()” method. So we are enforcing a particular sequence of code execution, which solves our problem.
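
For reference, with the code above, the console output on page load appears in this enforced order:

pre Init
page load operations here...
pre Render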

[Note: As far as I know, 100% enforcement is impossible in JavaScript. If you are a JS Ninja, you can easily bypass whatever enforcement is set. That is the beauty of the language!!]

How is this different from calling a common global function in document.ready, which could do the same job as my$.execute? Simple: we get proper namespacing and chainability in our my$ function, which prevents conflicts with other modules.

If you are a JavaScript newbie, this article might sound a bit weird, but it really isn’t. I’m no JS guru and won’t claim this is the best approach, but it helped me a lot in scaling up and maintaining huge .js files without any issues. If you can think of a better approach, please suggest it. I’m waiting to learn.

Happy coding :)

Understanding callbacks for chaining JavaScript methods

In continuation with my previous article on creating a chainable JavaScript toolbox, I would like to show how we can leverage JavaScript callbacks for enhanced functionality.

As said previously, this is how libraries are designed and the concept is nothing new. This is just an attempt to explain certain core features of JS libraries.

The core idea of chaining is to return 'this', which is an instance of the class, in each method of the class. In this way, each method passes an object to the next method and hence the chain continues endlessly.

However, not every method can return 'this' always. e.g., consider a method like 'getValue', which is expected to return a value. It obviously can't return 'this' and so the chain is broken. In such scenarios, callbacks come to our rescue.

Consider the below code, which is an extension of the code in my previous article (The private variable 'value' and two functions 'getValue' and 'setValue' are added):

(function(){
    var value='Hello world';
    var mySpace=function(){
        return new PrivateSpace();
    }
    var PrivateSpace=function(){
    };    
    PrivateSpace.prototype={
        init:function(){
            console.log('init this:', this);
            return this;
        },
        ajax:function(){
            console.log('make ajax calls here');
            return this;
        },
        cache:function(){
            console.log('cache selectors here');
            return this;
        },
        setValue: function(newValue){
            value=newValue;
            return this;    
        },
        getValue: function(callbackFunc){
            callbackFunc.call(this,value);
            return this;    
        }
    }
    window.my$=mySpace();
})(); 

The 'getValue' function is expected to return a value. If it returns a value, it can't continue the chain as it did not return 'this'. To overcome the problem, we can use callbacks.

The first line of 'getValue' uses Function.call(), a built-in JavaScript method. All it does is execute a given function in a desired context: the first param of Function.call sets the 'this' value inside the called function (callbackFunc), and the second param is passed to it as an argument.
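
For instance, here is a tiny standalone illustration of Function.call (the greet function and the prefix property are made-up names):

var greet=function(name){
    console.log(this.prefix+' '+name);
};
greet.call({prefix:'Hello'}, 'world');   //logs "Hello world"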

So, to access our 'getValue()' function, we should say:

my$.getValue(function(value){
    console.log('value is: ', value);
});

Doesn't it look like the syntax of jQuery's event handling/AJAX functions? Yes, jQuery makes heavy use of callbacks to achieve chainability.

The getValue() function accepts a function as an input parameter, which is executed internally using Function.call(). This function can be either anonymous or named (i.e., you can pass a function reference as input to getValue).
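
For example, the same call with a named function would look like this (logValue is just a hypothetical name):

var logValue=function(value){
    console.log('value is: ', value);
};
my$.getValue(logValue);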

Since we are returning 'this' in the definition of 'getValue', we can even chain it with other functions like:

my$.getValue(function(value){
    console.log('value is: ', value);
}).setValue('new value').init().ajax();

In the above code, we achieve both chainability and the ability to return data from our methods via callbacks. This solves the problem of broken JavaScript chains, which makes it very powerful.

This is something which really helped me in writing some critical utility functions in my project, and I thought it was worth sharing. Please let me know your suggestions.

Happy coding :)

Create a jQuery like chainable JavaScript toolbox specific to your project

"Global variables are evil" is what the JavaScript Guru Douglas Crockford says, as they are the source of unreliability and insecurity. How elegant your code would be if you wrap your entire project's code under a single global namespace?

[Did you know? The entire JavaScript code of Yahoo's website is accessible through a single global 'YAHOO' object!]

In this article, I would like to show how you can create a chainable JavaScript library (not exactly a library, but a sort of toolbox) specific to your project. The concept is nothing new; this is how libraries like jQuery are built. It is more about understanding certain design patterns in JavaScript.

The first thing to know is:

(function(){ 
    //your code here....
})(); 

This is nothing but a self-executing anonymous function. It simply executes whatever code you write inside it and then goes away. The private variables declared inside this function are not exposed to the global scope unless specifically attached to the window object.
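
A quick illustration of that privacy (the variable names here are arbitrary):

(function(){
    var secret='not visible outside';     //private to this function
    window.exposed='visible everywhere';  //explicitly attached to window
})();
console.log(typeof secret);   //"undefined"
console.log(window.exposed);  //"visible everywhere"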

The next thing to know about is prototypal inheritance in JavaScript. This is a huge topic in itself and the article assumes that the reader is familiar with the concept. The idea is that, inside our anonymous function, we have a private constructor function and extend its prototype with our custom functions.

This is what our JavaScript toolbox looks like:

(function(){
    var mySpace=function(){
        return new PrivateSpace();
    }
 
    var PrivateSpace=function(){
    };    
 
    PrivateSpace.prototype={
        init:function(){
            console.log('init this:', this);
            return this;
        },
        ajax:function(){
            console.log('make ajax calls here');
            return this;
        },
        cache:function(){
            console.log('cache selectors here');
            return this;
        }
    }
    window.my$=mySpace();
})();

In the above code, "PrivateSpace" is a private function which is prototyped with our desired functions. "mySpace" is a function which returns a new instance of "PrivateSpace" when executed.

As said before, our anonymous function executes once and does not expose these functions to the outer world. To expose our functions under a namespace, we should add the instance of our "PrivateSpace" to a window level(global) object. This is exactly what the last line of the code does.

So, when we say

window.my$=mySpace(); 

we are executing "mySpace()", which returns a new instance of the "PrivateSpace()" function, and assigning it to "my$", which is a window-level object. So if you print my$ like:

console.log(my$);

you would see all the functions present in the "PrivateSpace()" prototype. So you can call your functions like my$.ajax(), my$.cache() etc.

Note that each function in "PrivateSpace()" returns "this". i.e., each function returns an instance of "PrivateSpace()" and hence you can chain your methods like:

my$.init().ajax();

That's it! Now we have our own JavaScript toolbox specific to our project! So, no more global functions in our projects. As said, this is nothing new to the JavaScript world; this is the pattern jQuery uses for chaining methods.

I faced a few problems while adopting this pattern and posted them on StackOverflow. Folks there were kind enough to answer, and hence this article. I'm no JS guru and won't claim this is 100% perfect, but I have implemented it without any issues in a huge project. You may refer to Chapter 6 (Chaining) in Pro JavaScript Design Patterns, which explains this jQuery-like pattern.

Do you have better ideas? Please let me know.

Happy coding :)

When is AJAX an overkill for your ASP.NET-jQuery web applications? Part-2

In my previous article, I discussed a few scenarios where AJAX can be overkill for our web apps. I would like to add a few more such scenarios in this post.

(4) Heavy dropdowns: 

A dropdown control forces the user to select a value, preventing the entry of unwanted choices. This makes sense when a dropdown has a limited number of options, but we tend to populate dropdowns with hundreds or thousands of options (say, the list of all cities in a country).

The scenario gets worse when we make an AJAX call to fill such huge dropdowns. There are many articles which show 'how to fill a dropdown using AJAX', but none of them stress the limit on the number of options. For a dropdown with 3000 options, you are adding 3000 DOM elements, thereby degrading the performance of your page. Imagine 5 such dropdowns in a page!

Another scenario could be a row having a dropdown with 300 options, repeated 'n' times! These are unseen dangers which slow down your site.

Bottom line: Use dropdowns when you want to display a limited number (say ~50) of options. For anything more than this, use an autocomplete feature (you may choose between local and remote data based on the number of records). The same rule holds good for cascading dropdowns. Keep your DOM as small as possible!
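
As a rough sketch, assuming jQuery UI's autocomplete widget is available and cityList is a large array of city names obtained elsewhere, the heavy dropdown could be replaced with something like:

$('#txtCity').autocomplete({
    source: cityList,   //local data; switch to a remote URL for very large lists
    minLength: 2        //start filtering only after two characters are typed
});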

(5) Accordions: 

This is another beautiful UI technique, almost similar to tabs, the main difference being that tabs appear horizontally whereas accordions appear vertically. The problems faced with accordions are exactly the same ones I explained for tabs (please refer to my previous article).

Bottom line: If your page is too huge, do not fetch the entire DOM for building an accordion. On page load, build the accordion structure and fetch the content of the first pane only. On clicking the next header, remove the markup from the previous pane and populate the current pane. This way, your page always contains a limited number of elements (though heavy DOM manipulation can cause a little delay).
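
A minimal sketch of this idea, assuming hypothetical .accordion-header/.accordion-pane markup and a PaneContent.aspx endpoint:

$('.accordion-header').click(function(){
    var pane=$(this).next('.accordion-pane');
    $('.accordion-pane').not(pane).empty().hide();   //clear and collapse the other panes
    pane.show().load('PaneContent.aspx?section=' + $(this).attr('id'));   //fetch only this pane
});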

(6) Page load AJAX Calls/Multiple domains:

If you profile most web apps, they make a number of AJAX calls on page load for business functionality. For example, if you are trying to fill 5 dropdowns on page load using AJAX calls, you are inviting performance problems. In most cases, we can get rid of such calls by injecting JavaScript objects into the page.

Bottom line: If AJAX calls on page load are still unavoidable, instead of making multiple calls, make only a single call. In your web method, hit multiple controllers and fetch the desired response and club the responses using .NET’s Dictionary object. This way, you will reduce some network traffic.
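
On the client side, that single clubbed call could look roughly like this (GetInitialData and fillDropdown are hypothetical names; the server method returns one JSON object containing all the lookups):

$.ajax({
    type: 'POST',
    url: 'PageData.aspx/GetInitialData',
    contentType: 'application/json; charset=utf-8',
    dataType: 'json',
    success: function(result){
        var data=result.d;   //ASP.NET page methods wrap the JSON payload in 'd'
        fillDropdown('#ddlCountry', data.Countries);   //hypothetical helper
        fillDropdown('#ddlState', data.States);
    }
});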

You may also try to move some of your web services to different domains and access the data through JSONP. The maximum number of parallel connections per domain in IE8/FF3 is 6, which means that if you happen to make more than 6 AJAX calls, the rest get queued. By splitting your services across multiple domains, you can still make more than 6 calls in parallel!
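
A JSONP call to such a service might look roughly like this (the URL is hypothetical and the service must support a JSONP callback):

$.ajax({
    url: 'http://services.example.com/GetCities',
    dataType: 'jsonp',   //jQuery appends a callback parameter and injects a script tag
    success: function(data){
        console.log('received', data.length, 'records');
    }
});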

On a closing note, I think I'm writing more about theoretical aspects than code, but I don't want to lose these experiences in the darkness of my memories. Following these simple guidelines helped me solve many performance bottlenecks in a large-scale project. I feel this would be useful for developers at all levels while taking design decisions. Do let me know your views!

Follow me on twitter for live updates. Happy coding :)

When is AJAX an overkill for your ASP.NET-jQuery web applications? Part-1

AJAX libraries have simplified developers' lives by providing clean & easy-to-use APIs. They are so simple to use that we developers overuse them without realizing the performance impact. In this article, I would like to explain a few scenarios in which AJAX can be overkill for your web apps.

The motto is to help fellow developers take better design decisions at an early stage, rather than repenting and fine-tuning later. Below are real-world problems, faced in a large-scale ASP.NET/jQuery AJAX-based web app, which deserve serious thought:

(1) AJAX based navigation: 

“Keep DOM manipulations to the minimum” is what every JavaScript library says, but overuse of AJAX-based navigation defeats this purpose, so you may want to rethink the design unless your client specifically wants it.

If you are wondering what AJAX based navigation means, check jqGrid demos site. There are no post backs at all even while navigation. Content pages are fetched via AJAX and injected into a parent page, with a huge DOM manipulation.

If the page is small, with a small number of DOM elements, AJAX navigation is fine. As the number of elements in the page increases, injecting the page takes longer and, beyond a point, causes the ‘stop running script’ error discussed in my previous article. Typically, business applications contain hundreds of controls in a page, causing severe performance bottlenecks.

If you look at Facebook, the Home, Profile, Account etc. links at the top do a full post back and fetch the page, while any other operation is an AJAX call, which is a cooler approach.

Bottom line: AJAX navigation has performance problems when pages are huge. But it can solve problems like maintaining state by storing data in DOM elements, reducing session variables and reducing load on the server. So weigh these choices before taking a call.

You may find the total number of elements in the page using the jQuery expression $('*').length, so be cautious of this count while injecting a page. In a complex page like Yahoo.com, there are about 780 elements (each HTML tag corresponds to one element). Make sure your page has no more than 1000 DOM elements. If the count runs into thousands, split your pages.

(2) Client side templating:

If you liked the ASP.NET Repeater and are looking for client-side templating, hold on! There is a difference between the ASP.NET Repeater and client-side templates.

In the case of a repeater, processing takes place on the server and there is not much load on the browser. However, in the case of templates, both processing and injection happen on the client. Imagine templating 100 rows, with each row containing 30 elements. You end up with 3000 elements, which is alarming!

I can give you the example of twitter.com or facebook.com. Both have a ‘more’ sort of button at the bottom, which fetches more records. What happens if you want to see the posts of the last ten days? You end up with thousands of DOM elements and your browser slows down.

Bottom line: In terms of performance, what is apparent to the developer is only the time taken to process the template. What is hidden is the time taken for clearing events, handling memory leaks, fixing unclosed tags and injecting the template. All this happens in jQuery’s .html() method.

So, if you want to template huge data, make sure you are implementing pagination. Again, as in the above case, $('*').length is the key.
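
A bare-bones sketch of rendering one page of records at a time (no templating plugin assumed; the records array and #grid container are hypothetical):

var pageSize=25;
function renderPage(records, pageIndex){
    var html='', i,
        start=pageIndex*pageSize,
        end=Math.min(start+pageSize, records.length);
    for(i=start;i<end;i++){
        html+='<div class="row">'+records[i].name+'</div>';
    }
    $('#grid').html(html);   //only ~25 rows live in the DOM at any time
}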

(3) Tabs:

This fancy UI technique gives a wizard-like appearance to the content. But if you look only at the flashy side of it, you are getting into trouble! The scenario gets worse when you have AJAX tabs which fetch huge pages.

Let’s say each page has ~700 elements. So if you have 5 tabs, you have ~3500 elements. Imagine having blur, click and live events for so many elements. It’s a complete mess! You will also run into context-related issues, since two tabs can have two different controls with the same ID. When you are on an active tab, the content of the rest of the tabs is only hidden, not removed. So your app’s performance suffers yet again.

Bottom line: If you want to use tabs with optimal performance, make sure you clear the markup of the tabs which are not active. At any point in time, keep your $('*').length below 1000 for better results.
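
A minimal sketch of that approach, assuming hypothetical .tab links whose href points to their panel and a TabContent.aspx endpoint:

$('.tab').click(function(e){
    e.preventDefault();
    var panel=$($(this).attr('href'));
    $('.tab-panel').not(panel).empty();   //drop the markup of inactive tabs
    panel.load('TabContent.aspx?tab=' + $(this).attr('id'));   //fetch only the active tab
});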

I think I have covered quite a lot. I’m still facing several bottlenecks in my beautiful project and trying to figure out the solutions with my architect. Will cover more scenarios in my next article.

Happy coding :)

Update: The second part of this article is continued in my next blog post.

The curious case of "Stop running script" error & jQuery

Before starting with the article, I would like to share something which encouraged me a lot. Now I'm a Microsoft Most Valuable Professional (MVP) for ASP.NET. Thanks to Microsoft folks for recognizing my efforts. :)

Coming to the point: have you ever faced the “Stop running script” error message in your thick-client web apps? It is one of the most frustrating errors; it hangs the browser, spikes CPU usage and slows down your operations.

Nicholas C. Zakas has an excellent article on why this error occurs in various browsers. In short, his research says that it is triggered either by an exceedingly high number of operations (~5 million statements in IE) or by a script executing for a very long time (~10 seconds in FF).

For best performance, Zakas says that no script should take longer than 100 ms to execute, on any browser, at any point in time.

Now, why should jQuery developers worry about this?

They should, because jQuery is nothing but JavaScript, and the chances of hitting this error are higher if you don't understand the core methods properly. Let's see in detail what this means.

Take the script below as an example. Execute it in the Firebug console or the IE8 script panel, or simply copy/paste it into an HTML file (with jQuery referenced) and open it.

(function exec(){
    var str='', i;   //declare the loop counter locally instead of leaking a global
    for(i=0;i<10000;i++)
    {
        str+='<div>test div '+i+'</div>';
    }
    $('body').append('<div id="TestDiv"></div>');
    $('#TestDiv').html(str);   //inject all 10000 divs using jQuery's .html()
})();

Note: The above code might crash your browser, so please try it in a standalone browser instance. If you don't get the error, or you see different behaviour, you probably have a better CPU which does not spike to 100% for this code; the point here is about wrong usage of code, so increasing the loop limit should reproduce the error. This analysis is as per jQuery version 1.3.2.

What I'm doing here is pretty straightforward: just looping, creating 10000 elements and injecting them into the DOM. Now, what's so important here?

It's just a simple piece of code, and 10000 iterations is nowhere near the 5 million statement threshold on its own. Yet when you run this code for the first time, the browser stops responding, and when you run it a second time, you get the 'stop running script' error.

This might sound silly at first glance: such huge loops will obviously cause such errors. But what if you are doing this in your code without knowing it? Did you know that this error occurs in several Facebook apps and on Twitter? There is something beyond the loop.

$().html vs element.innerHTML:

Replace the line:

$('#TestDiv').html(str);

with this one:

$('#TestDiv')[0].innerHTML=str;

and try again. We are now using native JavaScript's innerHTML to inject the DOM elements. This is faster than jQuery's .html(), hence no error.

Does this mean it is jQuery's fault? No! It's purely the developer's ignorance. First of all, such huge DOM manipulations should not be made (though this is commonly done unknowingly). Second, you should be aware of what .html() does.

$('selector').html() internally removes the event handlers attached to every child element in the selector's DOM tree, cleans up the incoming markup (fixing things like unclosed tags) and then injects the new markup. So the first time, since no DOM elements were there, .html() only cleans the new markup and injects it. The second time, it has the additional task of removing the event handlers, so the number of operations increases and you get the error.

So when should I use .innerHTML and when should I use .html()?

A genuine doubt! Use .innerHTML if you are SURE you just have to replace the markup and the existing markup does not have any event handlers attached to it. Use jQuery's .html() when you want events attached to elements to be unbound and garbage collection/memory leaks taken care of. (You may refer to “jQuery Cookbook” for more info on this.)

This is not the only pitfall. JavaScript’s native for(;;) loop is faster than jQuery's $.each() loop. So before enjoying the benefits of the library, analyze the bottlenecks too.
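
A quick (and very rough) way to see the difference yourself, using the console's timing functions:

var data=[], i;
for(i=0;i<100000;i++){ data.push(i); }

console.time('native for');
for(i=0;i<data.length;i++){ /* work */ }
console.timeEnd('native for');

console.time('$.each');
$.each(data, function(index, value){ /* work */ });
console.timeEnd('$.each');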

(Q) When do such scenarios arise? Why would someone loop some 10000 times in their code?

(A) Practically no developer knowingly loops ten thousand times in his code, since it is an obvious performance issue; yet people tend to make this mistake unknowingly. The real point here is about large DOM manipulations. I shall explain such scenarios in my next article.

Happy coding :)

Web based presentation tools for web devs

The term "presentation" has almost become synonymous with Microsoft Office PowerPoint. Yup, without doubt it is the sole leader among s/w for presenting stuff and is improving in every version, the only problem being - difficulty in sharing the presentation.

With the emergence of Web 2.0 sites like SlideShare.net, sharing PPTs on the web has become quite easy. There are several web-based alternatives gaining popularity these days, with the benefit of creating/editing/sharing presentations entirely in the browser.

However, for web devs, having more flexibility over code gives better control, as they can show code/inline demos in the presentation itself! John Resig's tutorial on Advanced JavaScript is an excellent example of powerful inline demos.

There are XHTML based tools which have pre-built slide show features, the famous ones being: S5 (A Simple Standards based Slide Show System) and HTML Slidy. They provide many features like navigating using keyboard shortcuts, mouse clicks etc., simulating a powerpoint presentation.

With the advent of jQuery, things have become even easier! The recent jQuery 1.4 release had a slideshow based on jQuery code, which was awesome.

I had my own presentation template prepared using coda slider effect and it got good feedback :)

Now there is a jQuery presentation plugin specifically for creating presentations! It came floating by in my flood of tweets, thanks to Twitter & Trevor Davis (the author of the plugin)!

By the way, if you want to prepare a simple slide show which advances automatically across .html files, don't struggle with complex JavaScript code. Just use the meta refresh tag: its "content" attribute holds both the time interval and the URL of the next slide (HTML page), e.g. content="10; url=slide2.html". Once the interval elapses, the browser automatically redirects to the next slide.

So, lots of interesting options for presenters/modern web devs! Let's wait for HTML5 for fancier stuff like 3D transitions & making the most of the canvas element in the presentation itself!

Hope this article interests budding presenters like me :) Happy presenting!

Introduction to ASP.NET AJAX Library Beta - Microsoft Virtual Tech Days

Happy to say that I presented today at Microsoft Virtual Tech Days (VTD) on the ASP.NET AJAX Library (Beta) :)

You can download the presentation here: http://www.slideshare.net/novogeek/introduction-to-asp-net-ajax-librarybeta

I have also uploaded working demos. You can find them here: http://labs.novogeek.com/VTD18Mar2010/index.htm

Regarding the demos, I have converted the aspx files to .htm files, so that you can simply save the demos from the browser. (Same code as shown in VTD).

Please let me know in case of any issues. Thank you once again for the warm response :)

Overriding jQuery/JavaScript functions using closures

Function overriding is an important feature in any programming language. In languages like C#/.NET and Java, it can be accomplished easily through a set of keywords. It is not the same in JavaScript, and this is where closures come to your rescue.

This article is nothing new. In fact, it is a part of every JavaScript geek's toolkit that does not get much exposure, so I just wanted to throw some light on it. (If you are not very clear about closures, please visit this article and come back.)

Let us take a very simple example. Assume that you have a button to which a click handler is bound.

<input type="button" id="btnTest" value="Test Button"/>

var originalClick=function(e){ 
    e.preventDefault(); 
    console.log('original button click'); 
}; 
 
$(document).ready(function() { 
    $('#btnTest').click(originalClick); 
}); 

Now, if there is some requirement change and you need to add extra functionality to this button click, then instead of messing up its method, you can extend the originalClick method using closures, like this:

(function() { 
    var extendedClick= originalClick; 
    originalClick= function() { 
        console.log('Extending Test Button click'); 
        return extendedClick.apply(this, arguments); 
    }; 
})(); 

The above code is nothing but a self-executing anonymous function (it gets executed immediately after its definition). In the first line, we store the original function in a variable 'extendedClick' (we'll use this later). Then we create a new function with the same name as our original button click function (originalClick).

In the inner function, we write our extended logic and execute 'extendedClick' using the .apply() method (passing 'this' as the first param to .apply() makes sure that the context stays the same as in the original method). So the entire code would be:

var originalClick=function(e){
    e.preventDefault();
    console.log('original button click');
};
 
(function() {
    var extendedClick= originalClick;
    originalClick= function() {
        console.log('Extending Test Button click');
        return extendedClick.apply(this, arguments);
    };
})(); 
 
$(document).ready(function() {
    $('#btnTest').click(originalClick);
}); 

That's it! Your method is now overridden! OK, now let us see what exactly happened.

As per the definition of a closure, we have two nested functions and extendedClick is a local variable. The outer function is self-executing. During its execution, ‘extendedClick’ is obviously available inside the inner function ‘originalClick’, since it is defined in the parent context. The magic of the closure comes after the outer anonymous function has executed. How?

When you click the button, the overwritten ‘originalClick’ function fires, which prints the console statement ‘Extending Test Button click’. Then the ‘extendedClick’ function, which holds a reference to the original ‘originalClick’ function, fires. But why is it still accessible? The variable ‘extendedClick’ is private to the outer anonymous function, which means it should have been destroyed after the outer function executed. Yet it is still accessible, and that is the power of closure.

(Q) How can I override core jQuery methods using this concept?
(A) Ben Nadel explained it clearly in his article, which I happened to see while writing this one. It is the same concept, but I still wanted to write it up and explain it in layman’s terms. Thanks to Ben for his great blog posts and for the quick turnaround in times of need.

Using this technique, you can override even the $(document).ready function! This helps the most when you are on a maintenance project and have to make bulk changes to code with minimal impact.
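
For example, the very same closure trick can wrap a core jQuery method (here .hide(), purely as an illustration):

(function(){
    var originalHide=$.fn.hide;   //keep a reference to jQuery's original hide
    $.fn.hide=function(){
        console.log('hiding', this.length, 'element(s)');
        return originalHide.apply(this, arguments);   //delegate to the original
    };
})();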

Happy coding :)