Bad Browsers Lead to Bad Practices
Then came the 4.0 browsers. Although still buggy and incomplete, IE4 greatly improved on IE3’s CSS support. Netscape 4 offered CSS for the first time in a last-minute implementation so broken and foul it set adoption of CSS back by at least two years.
To be fair, Netscape 4’s CSS support was far better than IE3’s had been. But while almost nobody today uses IE3, millions still use Netscape 4. Thus, many site owners feel compelled to support Netscape 4—and confuse “support for Netscape 4,” which is a good thing, with “pixel-perfect sameness and identical behavior in Netscape 4,” which is a bad thing because it ties developers’ hands and forces them to write bad code and dumb markup.
The Curse of Legacy Rendering
Among Netscape 4’s chief CSS failings were legacy renderings and lack of inheritance.
Designed to abstract presentation from structure, CSS makes no assumptions about how elements are supposed to be displayed or even what markup language you’re going to use, although browsers and other user agents typically do make those assumptions. (Some modern browsers use CSS to enforce their assumptions and allow the designer’s style sheets to override them.) By default in most browsers, without CSS, an <h1> heading is big and bold, with vertical margins (whitespace) above and below.
CSS lets you change that. With CSS, <h1> can be small, italic, and margin-free if it suits the designer to make it so. Alas, not in Netscape 4, which adds its default legacy renderings to any CSS rule the designer specifies. If the CSS says there shall be no whitespace below the headline, Netscape 4 will go ahead and stick whitespace down there anyway.
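To make the idea concrete, a rule like the following sketch asks for exactly that kind of headline (the specific values are illustrative, not taken from a real site):

h1 {
  font-size: 1em;      /* small: no bigger than ordinary body text */
  font-style: italic;
  margin: 0;           /* no whitespace above or below the headline */
}

A compliant browser renders the headline exactly as specified; Netscape 4 applies the rule and then piles its legacy defaults back on top.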
When designers applied CSS to standard HTML markup, they quickly discovered that IE4 mainly did what they asked it to do, while Netscape 4 made a mess of their layouts.
Some designers abandoned CSS. Others (sadly including me) initially worked around the problem by eschewing structural markup, using constructions like <div class="headline1"> instead of <h1>. This solved the display problem at the expense of document structure and semantics, thereby placing short-term gain ahead of long-term viability and leading to numerous problems down the road. Said problems have now come home to roost.
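The difference is easy to see side by side (the headline text is invented for illustration):

<h1>Browsers Behaving Badly</h1>
<div class="headline1">Browsers Behaving Badly</div>

With the right style sheet loaded in a graphical browser, the two can look identical; strip the style sheet away, and only the first still announces itself as a headline.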
I’ve long since abandoned the practice of wholesale document structural deformation, but a huge proportion of designers and developers still write junk markup in the name of backward compatibility with Netscape 4. This normative practice is fatally flawed, creating usability problems while stymieing efforts to normalize and rationalize data-driven workflows.
Content management systems, publishing tools, and visual web editors (a.k.a. WYSIWYG editors) developed during the 4.0 browser era are littered with meaningless markup that vastly increases the difficulty and expense of bringing sites into conformance with current standards or preparing legacy content for XML-driven databases. On large sites created by multiple designers and developers, each designer might use different nonstandard tags, making it impossible to gather all the data and reformat it according to a more useful scheme. (Imagine a public library where books were indexed, not by the Dewey Decimal System, but according to the whims of Joe, Mary, and various other cataloguers, each making up their own rules as they went along.)
Outside the realm of graphical browsers, structurally meaningless markup also makes pages less usable. To a Palm Pilot, web phone, or screen reader user, <div class="headline1"> is plain text, not a headline. Thus, we buy or build content management systems that swap one set of tags for another when a single set of standard tags would serve. Or we force the users of Palm Pilots, web phones, and screen readers to view nonstructural markup and guess at our meanings.
We can thank Netscape 4 (and our own willingness to accommodate its failings) for miring us in this mess. No wonder those Netscape and Mozilla engineers kept working on the four-year-long Mozilla project. They really had no worthwhile legacy product to fall back on.
Inherit the Wind
Netscape 4 also failed to understand and support inheritance, the brilliant underlying concept that gives CSS its power. CSS streamlines production and reduces bandwidth by enabling general rules to percolate down the document tree unless the designer specifies otherwise.
For instance, in CSS, you can apply a font face, size, and color to the body selector, and that same face, size, and color will show up in any “child” of the body tag, from <h1> to <p> and beyond—but not in Netscape 4.
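In a compliant browser, a single rule on the body selector does the whole job (the particular face, size, and color here are only an example):

body {
  font-family: verdana, arial, helvetica, sans-serif;
  font-size: small;
  color: #333333;
}

Every <h1>, <p>, <td>, and other descendant inherits those values unless a more specific rule says otherwise.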
Knowledgeable developers worked around the browser’s lack of support for inheritance by writing redundant rules:
body, td, h1, p {font-family: verdana, arial, helvetica, sans-serif;}
In the preceding example, the td, h1, and p selectors are redundant because any compliant browser automatically styles those “child” elements the same way as their “parent,” the body element.
Slightly less knowledgeable developers spelled out their rules in full, thus creating even more redundancy while wasting even more bandwidth:
body {font-family: verdana, arial, helvetica, sans-serif;}
td {font-family: verdana, arial, helvetica, sans-serif;}
h1 {font-family: verdana, arial, helvetica, sans-serif;}
p {font-family: verdana, arial, helvetica, sans-serif;}
… and so on. It was a waste of user and server bandwidth, but it got the job done. Other developers concluded that CSS didn’t work in Netscape 4 (they had a point) or that CSS was flawed (they were wrong, but the perception became widespread).
Netscape 4 had other CSS failings—enough to fill the Yellow Pages of a major metropolis—but these are enough to paint the picture, and they were also enough to delay widespread adoption of the CSS standard.
Miss Behavior to You
Along with CSS snafus, early browsers could not agree on a common way to facilitate sophisticated behavior via scripting. Every scriptable browser has an object model stating what kinds of behaviors can be applied to objects on the page. Netscape 4 sported a proprietary document.layers model. IE4 countered with its own proprietary document.all model. Neither browser supported the W3C DOM, which (to be fair to Netscape and Microsoft) was still being written. Developers who wanted to apply sophisticated (or even basic) behaviors to their sites had to code two ways to cover these two browsers. Supporting earlier browsers (backward compatibility) required more code and more hoop jumping, as described in Chapter 2.
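A typical fork of that era looked something like this sketch (the function and the element name headline1 are hypothetical, and in Netscape 4 only positioned elements appeared in document.layers):

function setHeadline(text) {
  if (document.layers) {
    // Netscape 4: rewrite the positioned "layer" from scratch
    var layer = document.layers["headline1"];
    layer.document.open();
    layer.document.write(text);
    layer.document.close();
  } else if (document.all) {
    // IE4: reach the element through the proprietary document.all collection
    document.all["headline1"].innerHTML = text;
  }
}

Neither branch worked in the other browser, and neither had anything to do with the standard DOM that was still to come.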
The browsers could not even agree on a common scripting language. Early on, Netscape invented JavaScript, promising to release it as a standard so that other browser makers could support it. But for some years, despite that promise, Netscape held onto the secret of JavaScript, viewing it as a competitive advantage. (If Navigator remained the only browser that supported JavaScript, why would anyone develop for a less powerful competitive browser? So Netscape reasoned. In their place, Microsoft would likely have done the same. In fact, Microsoft did the same with their proprietary ActiveX technology.)
To compete with Netscape, Microsoft reverse-engineered JavaScript, changing it along the way as is inevitable in any reverse-engineering project. The resulting language worked like JavaScript, but not exactly like JavaScript—it was just different enough to louse you up. Microsoft called their scripting language JScript. Meanwhile, Microsoft also cooked up a separate technology they called ActiveX, which was supposed to provide seamless functionality in all versions of their IE browser but really only worked correctly on the Windows platform, where it is still used to do things like fill in for missing plug-ins.
JScript, JavaScript, ActiveX: In the name of cross-browser and backward compatibility, developers found themselves dancing with multiple partners, none of whom seemed to be listening to the same tune—and clients paid the piper in the form of ever-escalating development and testing costs.
Standardized Scripting at Long Last
Eventually, Ecma ratified a standard version of JavaScript that they modestly called ECMAScript (www.ecma-international.org/publications/standards/Ecma-262.htm). In time, the W3C issued a standard DOM. Ultimately, Netscape and Microsoft supported both—but not before years of hellish incompatibility had turned many developers into experts at proprietary, incompatible scripting techniques and object models and had persuaded many site owners that web development would always be a Balkanized affair. Hence, the “IE-only” site, the broken detection script, and in some cases the abandonment of web standards in favor of proprietary solutions like Flash.
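For comparison, here is a sketch of the same headline-swapping task using only the standard DOM (the element id is again hypothetical):

function setHeadline(text) {
  // One code path, one object model: the W3C DOM
  var headline = document.getElementById("headline1");
  while (headline.firstChild) {
    headline.removeChild(headline.firstChild);
  }
  headline.appendChild(document.createTextNode(text));
}

One function, any compliant browser.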
By the way, Ecma (formerly ECMA, as in European Computer Manufacturers Association) is a bona fide standards body, unlike the W3C, which long labeled its technologies “recommendations” instead of calling them “standards.” Interestingly, following the publication of the first edition of this book, the W3C stopped calling its recommendations “recommendations” and started calling them “standards.” Confusing names and bewildering labels are another reason standards have had difficulty achieving widespread acceptance on the modern web.