JavaScriptAbuse

If you have a boring, static web page, there are two ways to liven it up a bit with some dynamic JavaScript:

  1. Write the page so that it works fine without JavaScript, then layer the JavaScript on top as an enhancement.
  1. Write the page so that it simply doesn't work unless JavaScript is enabled.
Both ways look identical to those who have JavaScript turned on.

It should go without saying that the first way is the RightThingToDo. But far too often, well-intentioned people do it the second way.
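A minimal sketch of the first way — a page that works without JavaScript and is merely enhanced by it. The element id, the `openInlineGallery` widget, and the `env` parameter (standing in for the browser globals) are all hypothetical, chosen so the decision logic is a plain testable function rather than a definitive recipe:

```javascript
// The plain HTML link works for everyone, script or no script:
//
//   <a id="gallery-link" href="/gallery.html">View gallery</a>
//
// Decide whether enhancement is even possible in this environment.
function shouldEnhance(env) {
  return Boolean(env && env.document &&
    typeof env.document.getElementById === 'function');
}

// Upgrade the link when we can; otherwise leave the working href alone.
function enhanceGalleryLink(env) {
  if (!shouldEnhance(env)) return false;   // non-JS users keep the plain link
  var link = env.document.getElementById('gallery-link');
  if (!link) return false;
  link.addEventListener('click', function (e) {
    e.preventDefault();                    // swap navigation for an in-page widget
    env.openInlineGallery();               // hypothetical inline viewer
  });
  return true;
}
```

Users with JavaScript off never notice the script exists; users with it on get the fancier behaviour. Both groups can view the gallery.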


(EditHint: Is there some other wiki page here on "Curbing JavaScript dependency"? Or am I just remembering that article from http://herd.plethora.net/~seebs/ops/ibm/?)

You must be thinking of BrowserAbuseSyndrome. Should we apply RefactorByMerging?


Someone claims, "If you run JavaScript, you're highly spammable. Just switch it off, then complain to the webmasters who rely on it for core functionality."


I am astonished to discover that http://nasa.gov/ now has this message to those of us who don't have JavaScript enabled: "The nasa.gov site requires that JavaScripts be enabled in your browser. For instructions, click here". Usually ".gov" sites are better than average at accessibility.


Taking a well-intended feature and warping it: JavaScript's ability to create popups, for example, is occasionally very useful when writing a web-based UI, but hugely irritating when used to cover your screen in porn because you've accidentally typed in the wrong URL.

THE POWER TO DO ANNOYING THINGS DOES NOT MEAN YOU HAVE TO!!!

Absolutely. This is JavaScriptAbuse again.

Reminds me: those web sites that use client-side JavaScript to check that a form is correctly filled in, and even to precalculate some of the fields, are probably asking for trouble. Big trouble if the server relies on those checks; otherwise, most likely a violation of OnceAndOnlyOnce.

Second that. I've lost count of the number of times I've explained to various coworkers why JavaScript validation can help the user by reducing the number of round trips required, but is utterly undependable as a business rule or security check from the server application's point of view.

Client JavaScript and Server code are different packages on different machines, usually in different languages. OnceAndOnlyOnce doesn't apply, though you have the added job of keeping changes synchronized much as you do with ftp and ftpd. The main reason for using JavaScript for client-side validation and contextual morphing is that it makes life better for the user. Making it easier once for one programmer is trumped by making it easier many times for many users. I wouldn't be the first one to point out that good user interfaces are often a pain in the butt to build--anything of quality usually is. -- MarcThibault
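A minimal sketch of the point being argued above. All names here (`validateQuantity`, `handleOrder`, the 1–999 range) are invented for illustration; the shape of the idea is that the client-side check saves the user a round trip, while the server applies the same rule again because the request may not have come from your form at all:

```javascript
// One rule, written as a pure function. In a Node-on-both-ends setup it
// could literally be shared; with different languages on each side it
// must be duplicated and kept in sync, as discussed above.
function validateQuantity(raw) {
  var n = Number(raw);
  // Reject non-numeric, fractional, non-positive, or absurdly large input.
  if (!Number.isInteger(n) || n < 1 || n > 999) {
    return { ok: false, error: 'Quantity must be a whole number from 1 to 999.' };
  }
  return { ok: true, value: n };
}

// Client side: call validateQuantity() in the form's submit handler and
// show the error without a round trip.
// Server side: call it again on the posted value. Never trust the client.
function handleOrder(postedQuantity) {
  var check = validateQuantity(postedQuantity);
  if (!check.ok) {
    return { status: 400, body: check.error };
  }
  return { status: 200, body: 'Ordered ' + check.value };
}
```

The client-side call is pure convenience; only the server-side call is a business rule.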

My favorite example of the dangers of JavaScript validation without server-side backup has to be the one mentioned in this Daily WTF entry - http://thedailywtf.com/Articles/The_Spider_of_Doom.aspx - in which Google's spider ended up deleting numerous pages of content from a CMS because only JavaScript was used to guard against deletions... --CodyBoisclair
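A hedged sketch of the server-side guard that story was missing. A crawler follows plain GET links and runs no script, so a destructive action has to be protected on the server regardless of any client-side confirmation dialog; the request shape, route, and field names below are invented:

```javascript
// A spider issues anonymous GET requests, so a delete must require
// both a non-GET method and an authenticated session.
function canDeletePage(request) {
  if (request.method !== 'POST') return false;               // crawlers GET
  if (!request.session || !request.session.user) return false; // must be logged in
  return true;
}

function deletePageHandler(request, pages) {
  if (!canDeletePage(request)) {
    return { status: 403, deleted: false };
  }
  delete pages[request.pageId];
  return { status: 200, deleted: true };
}
```

With a check like this, Google's spider would have collected a pile of 403s instead of emptying the CMS.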


My problems with JavaScript as it's seen on the Web today:

  1. It's used to do things which are useless and/or scary. For example, printing out a message telling the user what browser and OS they are using. It's bad enough if the rest of your webpage design cares in the slightest about that; writing that data back to the user makes it appear as though the FBI is watching or something. Plus, these "script kiddies" never bother to format that data, so it all appears in this one cryptic string. My guess is that this is because they're all employing CopyAndPasteProgramming, and the original version didn't include any formatting because the original author meant it as "just a test".
  1. It's used to do things that used to be covered perfectly well by CGI - or even by HTML itself. I deeply resent being asked to use my processor to do whatever calculation to determine the name of the file to display. You're the one providing the service, you figure out the filename on your CPU. Worse yet, the JavaScript in these cases often takes its data from forms, which look identical to forms that issue a CGI request.

"You are choosing to view the page, so why don't you use your CPU?" This argument makes no sense. Besides, choosing to do it via CGI instead just means you end up using your bandwidth instead of your CPU. Six of one...

  1. Speaking of gathering browser/OS data - it's used to collect aggregate information on a site's users - such as whether or not they support JavaScript. The reality is that god-only-knows what percentage of users out there have JavaScript disabled normally for whatever reason (I do it to avoid popups because my browser is old and doesn't let me disable popups specifically, and also to decrease memory footprint; other people may be able to give other good reasons); all the surveys say everyone can handle it because the surveys are 100% self-selecting - something that no-one working for the website ever seems to realize. So you get everyone continuing to use their stupid JavaScript toys because it's "virtually guaranteed" it'll work. Free hint: CGI works even if the user is running Lynx or Mosaic or something equally ancient.
  1. It's used - I swear I am not making this up - to prevent HTML documents from rendering, even though the JavaScript is never subsequently used in any way whatsoever.

Seriously. I have seen stuff in page source like