
Altrepreneurial vs Entrepreneurial and Why I am going to Work with Al Jazeera

By Mark Boas

A few weeks ago I was asked to write a bio – explain a bit about myself in a few lines. I hate writing bios, all that faux third-personage – I never really know what to write.

The reason a bio was required was that I’d been offered a position as a Knight-Mozilla sponsored fellow with Al Jazeera – the happy end-result of a Knight-Mozilla news challenge that I have been actively enjoying over the last few months. It was agreed that I would work slightly less than full-time hours and on a largely remote basis and so I was glad to accept the fellowship knowing I could still dedicate time to my family and other interesting projects (not that I would classify my family as a project you understand). Working with Al Jazeera is of course an experience not to be missed. I couldn’t have imagined that my life would take such a turn a year ago or so, but I’m very glad that it did and I put it all down to my (broadly directionless) approach to working life.

I consider myself very lucky that, because of my sedentary lifestyle, I need very little money to get by. This has allowed me to follow my interests while working for our small ‘company’ Happyworm for the last 10 years. I consider this a very real success. We created a popular open source project and have fostered a fairly large community – from this many opportunities have arisen. Recently, for example, I was asked by the W3C to run an online audio/video course, which has been a fantastic experience, and now I have the opportunity to find out how the world of journalism works and hopefully contribute. All this would probably not have happened if I hadn’t simply thrown caution to the wind and followed my interests.

There are two of us working for Happyworm now. We used to be three, but our web designer (also my partner) decided that at least one of us should have a steady income and generously offered to take up full-time work at a local council. So it’s just the two of us, working with third-party designers when we need them. I’m based just outside Florence and Mark P is in the heart of Edinburgh. Although we are very different, we have similar requirements and a similar history – Mark P left a well-paid job as a CMOS camera chip designer to come and work on web stuff.

This finally brings me to the title of this post: when writing my bio I foolishly used a little-known, possibly non-existent word, describing Happyworm as a tiny altrepreneurial web agency.

I stumbled on the word altrepreneur some years ago here : http://www.mylifecoach-online.com/MLC%20newsletter%20May%202005.htm#authentic

“The Altrepreneur, like their colleague the Entrepreneur, runs one of the 3 Million Micro Businesses in operation in the UK today. However unlike the Entrepreneur, with a financial and career focus, the Altrepreneur is doing it for entirely different reasons.

It seems that 70% of those small businesses are being run because the owner/operator is focused on achieving a change in their life-style through running a small business, they are looking to increase their overall quality of life by putting in some up front hard graft.

This goes hand in hand with the growing movement around Authenticity (…). The idea that the source of much tension in our lives is the conflict between our true selves and the roles that we play. Getting in touch with your true self and letting go of that tension will lead to a very different kind of life.”

This article chimed very strongly with me – my objectives and the decisions I made to leave a well-paid job, set up Happyworm, move to another country, be my own boss and follow my own interests where possible. We now have two children, who I am fortunate enough to see a lot of. Recently I decided to look after nine-month-old Anna in the mornings for four days of the working week and then work from 14:30 until midnight, with a three-hour or so (I don’t time it) break for family dinner and games. I grow vegetables, cook at least once a day and am involved in the local community centre – finally I feel like I am achieving that mythical work-life balance.

So to me the word altrepreneurial seemed a perfect, concise way of explaining what I do and how I see Happyworm. Incidentally, Happyworm turned 10 years old in October. You might think 10 years is pretty good going for a small company, but the truth is we would never have lasted so long if we were in it for the money. We’ve had good spells but also our fair share of dry spells, where we worked on open source, brewed our own beer and patched our own clothes (or at least I did). It turns out that following our interests and making jPlayer has been much more of a success than we could have imagined: I think we’re approaching half a million downloads and, perhaps the best measure, a community of around two and a half thousand.

Money aside, like most people I’m not really happy working long hours on projects I’m not interested in (although through necessity I’ve done my fair share); in fact, I don’t think I’m any good at anything my heart isn’t in. Maybe I’ve been spoiled, but the most important thing for me is to enjoy my work and so my life. The money is always a secondary consideration, and that’s why, contrary to what you may see written in my bio, I work for a small altrepreneurial web agency – not an entrepreneurial one. Damn you, auto-correct!

This notebook has been lying on my desk for the best part of a decade.


Some perspectives on the Knight-Mozilla News Technology Fellowship :

My Life as a Startup – The Road not Taken by The Guardian fellow – Nicola Hughes

Journalism in the Open: the 2011/12 Knight-Mozilla Fellows by Project Header Upper – Dan Sinker

Knight-Mozilla names news technology fellowship winners Journalism.co.uk


Friday, December 16th, 2011 – development, journalism, jPlayer

P2P Web Apps – Brace yourselves, everything is about to change

Mark Boas

The old client-server web model is out of date, it’s on its last legs, it will eventually die. We need to move on, we need to start thinking differently if we are going to create web applications that are robust, independent and fast enough for tomorrow’s generation. We need to decentralise and move to a new distributed architecture. At least that’s the idea that I am exploring in this post.

It seems to me that a distributed architecture is desirable for many reasons. As things stand, many web apps rely on other web apps to do their thing, and this is not necessarily as good a thing as we might first imagine. The main issue is single points of failure (SPOFs). Many developers create (and are encouraged by the current web model to create) applications with many points of failure, without fallbacks. Take the Google CDN for jQuery, for example – however improbable it may be, take this down and half the web’s apps stop functioning. Google.com is probably the biggest single point of failure the world of software has ever known. Not good! Many developers think that if they can’t rely on Google, then who can they rely on? The point is that they should not rely on any single service, and the more single points you rely on, the worse it gets! It’s almost like we have forgotten the principles behind (and the redundancy built into) the Internet – a network based on the nuclear-war-tolerant ARPANET.

I believe it’s time to start looking at a new, more robust, decentralised and distributed web model, and I think peer-to-peer (P2P) is a big part of that. Many clients, doubling as servers, should be able to deliver the robustness we require; specifically, SPOFs would be eliminated. Imagine Wikileaks as a P2P app – if we eliminate the single central-server URL model, then even if something happens to wikileaks.com the web app continues to function. It’s no coincidence that Wikileaks chose to distribute documents over bittorrent. Another benefit of a distributed architecture is that it scales better; we already know this of course – it’s one of the benefits of client-side code. Imagine Twitter as a P2P web app, and imagine how much more easily it could scale with the bulk of processing distributed amongst its clients. I think we could all benefit hugely from a comprehensive web-based P2P standard; essentially this boils down to browsers with built-in web servers using an established form of inter-browser communication.

I’m going to go out on a limb here for a moment; bear with me as I explore the possibilities. Let’s start with a what-if. What if, in addition to being able to download client-side JavaScript via your cache manifest, you could also download server-side JavaScript? After all, what is the server side now, if not little more than a data/comms broker? Considering the growing popularity of NoSQL solutions and server-side JavaScript (SSJS), it seems to me that it wouldn’t take much to be able to run SSJS inside the browser. Granted, you may want to deliver a slightly different version than the one that runs on the server, and perhaps this is little more than an interim measure. We already have the Indexed Database API on the client; if we standardise the data-storage API part of the equation, we’re almost there, aren’t we?
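
To make that last point a little more concrete, here is a rough sketch of what the local data-storage half might look like using the Indexed Database API mentioned above. The API itself is real (though prefixed as webkitIndexedDB/mozIndexedDB in current browsers); the database and store names are invented purely for illustration.

  <script type="text/javascript">
  // Fall back to the vendor-prefixed implementations where necessary.
  var idb = window.indexedDB || window.webkitIndexedDB || window.mozIndexedDB;
  var request = idb.open("p2p-app", 1);

  request.onupgradeneeded = function (event) {
     // First run: create a local store for the app's data.
     event.target.result.createObjectStore("messages", { keyPath: "id" });
  };

  request.onsuccess = function (event) {
     var db = event.target.result;
     // The downloaded "server-side" script could read and write here
     // instead of talking to a central database.
     db.transaction("messages", "readwrite")
       .objectStore("messages")
       .put({ id: 1, text: "Hello, peers!" });
  };
  </script>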

So let’s go through a quick example – take Twitter again: how would this all work? Say we visit a twitter.com that detects our P2P capabilities and redirects to p2p.twitter.com. After the page has loaded we download the SSJS and start running it on our local server. We may also grab some information about which other peers are running Twitter. Assuming the P2P is done right, almost immediately we are independent of twitter.com – we’ve got rid of our SPOF. \o/ At the same time, if we choose to run offline we have enough data and logic to do all the things we would usually do, bar transmit and receive. Significantly, load on twitter.com could be greatly reduced by running both server-side and client-side code locally. To P2P browsers, twitter.com would become little more than a tracker or a source of updates. Also, let’s not forget, things run faster locally :)
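
A sketch of the hand-off described above. No browser currently exposes anything like a ‘local server’ object – the navigator.p2pServer check below is entirely hypothetical and is there only to illustrate the flow; navigator.onLine, by contrast, already exists.

  <script type="text/javascript">
  if (navigator.p2pServer) {
     // Hypothetical capability check: hand over to the P2P version of
     // the app and let the locally running "server-side" code take over.
     window.location.href = "http://p2p.twitter.com/";
  } else if (!navigator.onLine) {
     // Offline: work against locally cached data and logic only,
     // queueing anything we would normally transmit.
  }
  </script>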

Admittedly the difference between server-side and client-side JS does become a little blurred at this point. But as an interim measure at least, it could be useful to maintain this distinction, as it would provide a migration path for web apps to take. For older browsers you can still rely on the single-server method while providing for P2P browsers. Another interesting beneficial side effect is that perhaps we can bring the power of ‘view source’ to server-side code. If you can download it you can surely view it.

The thing that I am most unsure about is what role HTTP plays in all this. It might be best to take an evolutionary approach to such a revolutionary change. We could continue to use HTTP to communicate between clients and servers, retaining the many benefits of a RESTful architecture, and restrict more efficient TCP-based comms to the real-time ‘push’ aspects of applications that require them.

It seems to me that with initiatives such as Opera Unite (where P2P and a web server are already being built into the browser) and other browser makers looking to do the same, this is the direction in which web apps are heading. There are too many advantages to ignore: robustness, speed, scalability – these are very desirable things.

I guess what is needed now is some deep and rigorous discussion and, if it is considered feasible, standards to be adopted so that all browsers can move forward with a common approach to implementation. Security will obviously feature highly in these discussions, and so will common library re-use on the client.

If web apps are going to compete with native apps, it seems that they need to be able to do everything native apps do, and that includes P2P communication and enhanced offline capability; it just so happens that these two important aspects seem to be linked. Once suitable mechanisms and standards are in place, the fact that you can write web applications that are equivalent or even superior to native apps, using one common approach for all platforms, is likely to be the killer differentiator.

Thanks to @bluetezza, @quinnirill, @getify, @felixge, @trygve_lie, @brucel and @chrisblizzard for their inspiration and feedback.

Further reading : Freedom In the Cloud: Software Freedom, Privacy, and Security for Web 2.0 and Cloud Computing, The Foundation for P2P Alternatives, Social Networking in the cloud – Diaspora to challenge facebook, The P2P Web (part one)


Monday, March 7th, 2011 – development, HTML5, javascript, P2P, Web Apps

A few HTML5 questions that need answering

Mark Boas

Challenge Accepted :)

Earlier this week Christian Heilmann posted a talk and some interesting questions on his blog. Perhaps foolishly, I decided to accept the challenge of answering them, and what follows are my answers to his questions. My hope is to stimulate a bit of civilised debate, so I hope people challenge what I have written. I’ve tried not to be too opinionated, qualifying things with ‘I think’, ‘probably’ etc. where possible, but these are inevitably my opinions (and nobody else’s) and should be interpreted as just that.

I think Christian has done a great job putting these questions together – if they make other people think as much as they made me think they’ve certainly been worthwhile. Like many of life’s great questions the answers don’t always boil down to yes and no. In fact usually there is a grey area in which a judgment call is required, and so I apologise in advance if some of the answers are simply ‘it depends’.

Can innovation be based on “people never did this correctly anyways”?

Yes, I’d say that innovation can be based on almost anything. It’s probably better if innovation is based on solid foundations, however I believe the web is more of a bazaar than a cathedral and so solid foundations aren’t always feasible.

Look at some older browsers where, let’s say, the spec was interpreted in an unexpected way – specifically IE5.5 and the CSS box model. Hacks are required to get around its ‘creative’ implementation, and because being able to write cross-browser web pages with the same markup is such a huge win, hacks are almost always found. Hacks by their very nature will often be ugly, but this can help remind us of their existence.* Once the hack is found, innovation continues again at its usual pace. I’d argue that innovation on the web is built on a series of (often ugly) hacks.
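
To make the example concrete, this is roughly what the box model hack referenced below* looks like – IE5.x applies the first width (which it wrongly treats as the full box, padding and border included), stops parsing at the escaped voice-family value, and compliant browsers read on to get the intended content width. The selector and dimensions here are illustrative.

  #content {
     padding: 10px;
     border: 5px solid #000;
     width: 430px;            /* IE5.x: the full 430px box */
     voice-family: "\"}\"";   /* IE5.x stops parsing the rule here */
     voice-family: inherit;
     width: 400px;            /* compliant browsers: 400px of content */
  }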

To answer the question in the context of HTML5’s tolerance for sloppy markup, it’s interesting to note that HTML5 does allow you to mark up as XHTML if you so choose; personally, and maybe because I’m over-fastidious, this is what I try to do. Whether the browser picks up my mistakes or just silently corrects them is another issue. Modern editors’ syntax highlighting helps a lot, but if you are taking your markup seriously you may want to validate it. At the end of the day, innovation will take place regardless.

*http://tantek.com/log/2005/11.html

Is it HTML or BML? (HyperText Markup Language or Browser Markup Language)

I think HTML should be accessible to more than what we traditionally view as the browser, yes. From an accessibility standpoint, it should be our responsibility to make sure that pages are reasonably accessible from a range of devices, such as screen readers. What ‘reasonably accessible’ means could be cause for considerable debate. Additionally, it is in most people’s interest to make their pages readable by a search-engine bot. Some developers may find that they want their pages to be parseable, in which case they should probably mark them up as XHTML.

I think, though, that the most effective way to encourage people to mark up their pages for devices other than browsers is to make clear the benefits of doing so. Search bots and screen readers will eventually catch up, and I think at that point we will be in a better place.

On a side note, hypertext is what the web is founded on – it’s the essence of the web, if you like. If you stripped down the web to just hypertext you’d still have a web, albeit a limited version of what we have today. Hypertext is really just text or documents endowed with the ability to link to other documents. This ability to link is of course extremely powerful and enables anyone to create a web page that is instantly ‘part’ of the web. Some companies are starting to rely less on the humble hyperlink and to invent markup for themselves that relies on JavaScript to actually work; Facebook’s FBML is an example of this, and I would perhaps put this ‘extension’ of HTML in the BML camp.

But while so much has been added to HTML, the hyperlink still remains and developers are encouraged to use it, even if in some cases it is just a fallback for more complex interactions. Only the other day we experienced the fallout from Google’s hashbang method of ajaxifying a link; I feel a large part of this stemmed from developers feeling they no longer had to populate their hrefs, when in reality it is still highly recommended that they do. So no matter how complex a web application becomes, I’d argue that it should always be based fundamentally on the good old hyperlink – in which case, strictly speaking, it would be HTML.
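
As a small illustration of what populating your hrefs buys you – a sketch only, with made-up URL and selectors, and jQuery used simply because it is what this blog normally reaches for – the link works on its own and the script merely upgrades it:

  <a href="/articles/page/2" class="ajax">Next page</a>

  <script type="text/javascript">
  // Enhance the link: fetch the same URL it already points at and swap
  // the content in place. With JavaScript disabled (or for a search bot)
  // the plain href still works.
  $("a.ajax").click(function (event) {
     event.preventDefault();
     $("#content").load(this.href + " #content");
  });
  </script>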


Should HTML be there only for browsers? What about conversion Services? Search bots? Content scrapers?

Some would argue that information published publicly on the web is there to be consumed by whoever or whatever chooses to consume it. However, a semantic web makes content more accessible not only to humans but also to the algorithm-running computers that seek to disseminate and categorise the wealth of information out there; this new information can then be presented back to a human, and so the wheel turns and the machine cranks on. Again, I feel developers should be shown the benefits.

Touching again on accessibility, laws exist – and I think rightly so – to mandate that certain websites should be accessible to all, just as they do for certain buildings, especially public ones. Whether your house or, say, a merry-go-round should be accessible is less clear. During the development of jPlayer we decided to make elements of it accessible because we felt it was the right thing to do, but the very first versions weren’t. The ‘right thing to do’ can often be a more powerful motivator than any rule, especially one that is difficult to enforce.


Should we shoe-horn new technology into legacy browsers?

I think we should aim to support as many legacy browsers as reasonably possible – yes. What is ‘reasonably possible’ has always been a contentious issue but I think most developers broadly agree on what this means.

A key decision is how consistent you want each browser’s interpretation of your web page or application to be. Note that the experience will never be identical across all browsers, even the modern ones, so a certain degree of compromise is always required. I view the shoe-horning of new technology into legacy browsers as just another hack, although instead of correcting a mis-implemented feature, its aim is to augment the legacy browser’s feature set to more closely match newer browsers.


Do patches add complexity as we need to test their performance? (there is no point in giving an old browser functionality that simply looks bad or grinds it down to a halt)

If we take patches to mean any code, styling or markup that can adequately add missing functionality to a browser (also referred to as shims, shivs or, more recently, polyfills), then yes, patches do add complexity, but well-written patches that can easily be dropped into a project should mask that complexity from the web developer. You could argue that patches are essential if we are to start using new technologies like HTML5 in the wild. Again, a decision needs to be made as to whether a patch slows down a browser unacceptably, whether to fall back to a less complete solution for legacy browsers, or whether not to use the new technology at all yet.
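
A trivial example of the kind of patch in question (purely illustrative): feature-test first, and only fill the gap when the browser actually lacks it, so capable browsers pay no penalty.

  <script type="text/javascript">
  if (typeof Date.now === "undefined") {
     // Older browsers: provide the missing Date.now().
     Date.now = function () {
        return new Date().getTime();
     };
  }
  </script>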


How about moving IE fixes to the server side? Padding with DIVs with classes in PHP/Ruby/Python after checking the browser and no JS for IE?

The trouble with moving fixes to the server side is that, in my experience at least, front-end developers and especially designers (whose lives these fixes affect) like to be able to develop applications with as little contact with the server as possible. Some web pages don’t even require any real server-side interaction, so we would be adding a burden to the designer’s or developer’s development cycle. That said, if the web developer could continue to work as usual on a web application that relied on the server side anyway, safe in the knowledge that the server would take care of any differences, then I suppose that would be one possible solution.

Another issue is that server-side solutions tend to be less transparent, and this could cause unnecessary resistance from those who might seek to change and evolve these fixes. You would also have to deal with the situation where several server-side fixes existed (PHP/Ruby/Python etc.), the exception perhaps being mods to the actual web server (although web servers do vary).

It is true that developers or designers are unlikely to work without a web server these days, but on balance I think there are a lot of issues to be resolved before all but the simplest fixes get moved to the server side. So I’d say some IE fixes on the server side could well make sense; anything more complex probably isn’t warranted.


Can we expect content creators to create video in many formats to support an open technology?

I don’t think we can expect content creators to create a video in many formats to support a certain technology just because it is open. It’s more likely to depend on the content creators and the market they are appealing to – whether they are creating a truly native cross-platform solution or whether they are willing to fall back to, say, Flash to deliver content that a browser doesn’t natively support. I think this is an area where some sort of server-side video-format-conversion module could be useful in encouraging developers to support native video in every browser. Ironically, any tool that converts to or from certain types of encoded video could be subject to royalties (I’d love to be proved wrong here). A content creator could decide to support only open video formats, but if, for example, H.264 is assumed not to be open, the big question is how you create a video fallback for iOS, whose browser doesn’t support Flash or similar.
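
In practice the ‘many formats’ approach looks something like the markup below: one clip encoded several ways, with Flash as a last resort for browsers that support none of them natively. File names and the Flash player path are illustrative.

  <video width="640" height="360" controls>
     <source src="clip.mp4"  type="video/mp4">   <!-- H.264: Safari, IE9, iOS -->
     <source src="clip.webm" type="video/webm">  <!-- WebM: Chrome, Firefox 4+, Opera -->
     <source src="clip.ogv"  type="video/ogg">   <!-- Ogg Theora: older Firefox and Opera -->
     <!-- Flash fallback for browsers with no native video support -->
     <object type="application/x-shockwave-flash" data="player.swf" width="640" height="360">
        <param name="movie" value="player.swf">
        <param name="flashvars" value="file=clip.mp4">
        <a href="clip.mp4">Download the video</a>
     </object>
  </video>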


Can a service like vid.ly be trusted for content creation and storage?

The question that immediately springs to mind is: what happens if vid.ly goes down? You are essentially introducing another point of failure, and it’s probably a good idea to keep these to a minimum, depending of course on how crucial it is that the content is always available. You could also argue that depending on CDNs to host your favourite JavaScript library is a bad idea, so I guess it’s a judgment call on how likely a particular service is to go down. Leaving aside the issue of reliability, what happens if a service like vid.ly gets bought out, or hacked, or decides to change its policy? These are all important questions that we’d probably do well to ask ourselves before relying on such a service.

Content creation could be a winner though, in the sense that it would be nice to upload one format and have several other formats made available to you. It’s a bit unfashionable, but I sometimes wonder whether the WordPress model of being able to download and drop a useful module onto your LAMP stack makes more sense. I’d love to be able to host my own personal version of vid.ly for use on my website only – it also makes sense from a distributed-computing point of view.


Is HTML5 not applicable for premium content?

If we take premium content to mean content available for a fee, it is certainly more difficult to protect that content from being downloaded with HTML5. Being able to view source is arguably one of the most important aspects of open technology, and HTML5 is a group of open technologies. However, there is almost always a way to grab premium content, whether it is based on open technology or not, so at the end of the day we’re really talking about how difficult you make this.


Mark B


Thursday, February 17th, 2011 – AJAX, configuration, CSS, development, HTML, HTML5, javascript, SEO

Add a Stylish Audio Player to your Blog Without using Plugins

Mark Boas

Admittedly, it’s a little crude, but it’s simple and it works.

If you’ve ever wanted to add an audio player to your blog post and didn’t want to go to the trouble of installing a specific plugin to do so, you can always embed jPlayer within an iFrame. The one-liner you need to achieve this would probably look a lot like this:
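
(The src below is just a placeholder – point it at a small page of your own that loads jQuery, jPlayer and your audio file, and size the iframe to match your player skin.)

  <iframe src="http://www.example.com/my-jplayer-page.html" width="430" height="60" frameborder="0" scrolling="no"></iframe>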


which would give you a neat little audio player embedded right there in your post.

Sure, it’s not that efficient – you may end up loading jQuery twice if it’s already included as part of your blog – but there’s the added advantage that the page will render progressively, and you have a nice little sandbox to play in should you want to alter the player.

I used to hate iFrames and shudder at their very mention, but I must admit I’m coming around to them.

Mark B

A Simple and Robust jQuery 1.4 CDN Failover in One Line

Mark Boas

Google, Yahoo, Microsoft and others host a variety of JavaScript libraries on their Content Delivery Networks (CDNs) – the most popular of these libraries being jQuery. But it wasn’t until the release of jQuery 1.4 that we were able to create a robust failover solution should the CDN not be available.

First off, let’s look into why you might want to use a CDN for your JavaScript libraries:

1. Caching – The chances are that a visitor already has the resource cached from another website that linked to it.

2. Reduced Latency – For the vast majority of visitors, the CDN is very likely to be much faster and closer than your own server.

3. Parallelism – There is a limit to the number of simultaneous connections you can make to the same server. By offloading resources to the CDN you free up a connection.

4. Cleanliness – Serving your static content from another domain can help ensure that you don’t get unnecessary cookies coming back with the request.

All these aspects are likely to add up to better performance for your website – something we should all be striving for.

For more discussion of this issue I highly recommend this article from Billy Hoffman : Should You Use JavaScript Library CDNs? which includes arguments for and against.

The fly in the ointment is that we are introducing another point of failure.

Major CDNs do occasionally experience outages, and when that happens, potentially all the sites relying on that CDN go down too. So far this has happened fairly infrequently, but it is always good practice to keep your points of failure to a minimum, or at least to provide a failover – a backup method you use if your primary method fails.

I started looking at a failover solution for loading jQuery 1.3.2 from a CDN some months ago. Unfortunately, jQuery 1.3.2 doesn’t recognise that the DOM is ready if it is added to a page AFTER the page has loaded. This made a simple but robust JavaScript failover solution more difficult, as a race condition could occur with jQuery loading after the page was ready. Alternative methods, which relied on blocking the page load and retrying an alternative source if the primary returned a 404 (page not found) error code, were complicated by the fact that the Opera browser didn’t fire the "load" event in this situation, so in a cross-browser solution it was difficult to move on to the backup resource.

In short, writing a robust failover solution wasn’t easy and would consume significant resource. It was doubtful that the benefit would justify the expense.

The good news is that jQuery 1.4 now checks for the dom-ready state and doesn’t just rely on the dom-ready event to detect when the DOM is ready, meaning that we can now use a simpler solution with more confidence.

Note that, importantly, the dom-ready state is now implemented in Firefox 3.6, bringing it into line with Chrome, Safari, Opera and Internet Explorer. This gives us great browser coverage.

So now in order to provide an acceptably robust failover solution all you need do is include your link to the CDN hosted version of jQuery as usual :

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>

and add some JS inline, similar to this:

  <script type="text/javascript">
  if (typeof jQuery === 'undefined') {
     var e = document.createElement('script');
     e.src = '/local/jquery-1.4.min.js';
     e.type='text/javascript';
     document.getElementsByTagName("head")[0].appendChild(e);
  }
  </script>

You are then free to load your own JavaScript as usual:

<script type="text/javascript" src="my.js"></script>

It should be noted, however, that the above approach does not "defer" the loading of other scripts (such as jQuery plugins) or script execution (such as inline JavaScript) until after jQuery has fully loaded. Because the fallback relies on appending a script tag, further script tags after it in the markup will load and execute immediately rather than waiting for the local jQuery to load, which could cause dependency errors.

One solution to this is:

<script type="text/javascript">
  if (typeof jQuery === 'undefined')
     document.write('<script type="text/javascript" src="/local/jquery-1.4.min.js"><\/script>');
</script>

Not quite as neat perhaps, but crucially, by using document.write() you block other JavaScript lower in the page from loading and executing until the fallback jQuery has loaded.

Alternatively you can use Getify’s excellent JavaScript Loading and Blocking library LABjs to create a more solid solution, something like this :

  <script type="text/javascript" src="LAB.js"></script>
  <script type="text/javascript">
  function loadDependencies(){
  $LAB.script("jquery-ui.js").script("jquery.myplugin.js").wait(...);
  }

  $LAB
  .script("http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js").wait(function(){
  if (typeof window.jQuery === "undefined") 
     // first load failed, load local fallback, then dependencies
     $LAB.script("/local/jquery-1.4.min.js").wait(loadDependencies);
  else 
     // first load was a success, proceed to loading dependencies.     
     loadDependencies();  
  });
  </script>

Note that since the dom-ready state is only available in Firefox 3.6+, <script>-only solutions without something like LABjs are still not guaranteed to work well with Firefox 3.5 and below – and even with LABjs, only when using jQuery 1.4 (and above).

This is because LABjs contains a patch to add the missing "document.readyState" to those older Firefox browsers, so that when jQuery comes in, it sees the property with the correct value and dom-ready works as expected.

If you don’t want to use LABjs you could implement a similar patch like so:

  <script type="text/javascript">
  (function(d, r, c, addEvent, domLoaded, handler){
     if (d[r] == null && d[addEvent]){
        d[r] = "loading";
        d[addEvent](domLoaded, handler = function(){
           d.removeEventListener(domLoaded, handler, false);
           d[r] = c;
        }, false);
     }
  })(window.document, "readyState", "complete", "addEventListener", "DOMContentLoaded");
</script>

(Adapted from a hack suggested by Andrea Giammarchi.)

So far we have talked about CDN failure, but what happens if the CDN is simply taking a long time to respond? The page load will be delayed for that whole time before proceeding. To avoid this situation, some sort of time-based solution is required.

Using LABjs, you could construct a (somewhat more elaborate) solution that would also deal with the timeout issue:

  <script type="text/javascript" src="LAB.js"></script>

  <script type="text/javascript">
  function test_jQuery() { jQuery(""); }
  function success_jQuery() { alert("jQuery is loaded!"); }

  var successfully_loaded = false;
  function loadOrFallback(scripts,idx) {
     function testAndFallback() {
        clearTimeout(fallback_timeout);
        if (successfully_loaded) return; // already loaded successfully, so just bail
        try {
           scripts[idx].test();
           successfully_loaded = true; // won't execute if the previous "test" fails
           scripts[idx].success();
        } catch(err) {
           if (idx < scripts.length-1) loadOrFallback(scripts,idx+1);
        }
     }
 
     if (idx == null) idx = 0;
     $LAB.script(scripts[idx].src).wait(testAndFallback);
     var fallback_timeout = setTimeout(testAndFallback,10*1000); // only wait 10 seconds
  }
  loadOrFallback([
     {src:"http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js", test:test_jQuery, success:success_jQuery},
     {src:"/local/jquery-1.4.min.js", test:test_jQuery, success:success_jQuery}
  ]);
  </script>

To sum up – creating a truly robust failover solution is not simple. We need to consider browser incompatibilities, document states and events, dependencies, deferred loading and time-outs! Thankfully, tools like LABjs exist to help us ease the pain by giving us complete control over the loading of our JavaScript files. Having said that, you may find that in many cases the <script>-only solutions are good enough for your needs.

Either way my hope is that these methods and techniques provide you with the means to implement a robust and efficient failover mechanism in very few bytes.

A big thanks to Kyle Simpson, John Resig, Phil Dokas, Aaron Saray and Andrea Giammarchi for the inspiration and information for this post.

Mark B

Related articles / resources:

1. http://LABjs.com

2. http://groups.google.com/group/jquery-dev/browse_thread/thread/87a675bd840ff070/ac60006335f9e23a?hl=en

3. http://aaronsaray.com/blog/2009/11/30/auto-failover-for-cdn-based-javascript/

4. http://snipt.net/pdokas/load-jquery-even-if-the-google-cdn-is-down/

5. http://stackoverflow.com/questions/1447184/microsoft-cdn-for-jquery-or-google-cdn

6. http://webreflection.blogspot.com/2009/11/195-chars-to-help-lazy-loading.html

7. http://zoompf.com/blog/2010/01/should-you-use-javascript-library-cdns/


Thursday, January 28th, 2010 – AJAX, configuration, development, javascript, jQuery