Local Government Archive

May 19, 2006

My top 3 public sector sites

@ 9:37 PM

William Heath from the Ideal Government Project (external link) turned the tables on me a bit after my rant about the new DTI website, and asked:

What are your three top-rated public-service web sites? And is it just accessibility you focus on, or content and effectiveness?

To answer the second question first, I don't just focus on accessibility, but I do believe that a high degree of accessibility is a fundamental characteristic of any quality website. It's a good general indicator - to be truly accessible a site must have a number of other inherent qualities, including valid, semantic mark-up, good information architecture and a usable interface.

I also like to see apparently minor features like clean, technology-neutral URLs, good 404 pages, robust error recovery, user-friendly functions like RSS feeds and mailing lists, and effective search. It's this attention to detail that separates a great site from an average site for me.
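As a rough illustration of the kind of detail I mean, here's a minimal Python sketch (standard library only, with a placeholder address) that checks whether a site returns a genuine 404 for a nonsense URL and whether its home page advertises a feed. A truly good 404 page still needs a human look, of course - the status code is only the start.

```python
# Minimal sketch of a "detail" check: does a site answer a nonsense URL with a
# proper 404, and does its home page advertise an RSS/Atom feed?
# The address below is a placeholder, not a real council site.
import urllib.request
import urllib.error

SITE = "https://www.example.gov.uk"  # placeholder

def status_for(url):
    """Return the HTTP status code a URL responds with."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.getcode()
    except urllib.error.HTTPError as err:
        return err.code

# A well-behaved site should answer a made-up path with a genuine 404,
# not a "soft" 200 error page or a redirect back to the home page.
print("Nonsense URL status:", status_for(SITE + "/this-page-should-not-exist"))

# A crude indicator of a feed: an RSS/Atom alternate link in the home page mark-up.
with urllib.request.urlopen(SITE, timeout=10) as response:
    home_html = response.read().decode("utf-8", errors="replace")
print("Feed advertised:",
      "application/rss+xml" in home_html or "application/atom+xml" in home_html)
```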

My three top-rated public service sites? It's a great question, and the answer is naturally very subjective. All of these sites have issues (as does every site I've ever worked on I hasten to add), but I'll go for:

  1. Royal Borough of Kensington and Chelsea (external link)
  2. London Borough of Lambeth (external link)
  3. Lincolnshire County Council (external link)

Try as I might, I couldn't come up with an exemplar from central government. I did consider the National Crime Squad (external link) since it's been standards-based for a long time now, but it's a defunct body and the site won't be there much longer.

So what are your top three public sector sites? What gems have I missed? They needn't be UK or English-language sites - it would be really interesting to hear of some top-notch sites from elsewhere.

March 30, 2006

Seduced by automated testing?

@ 6:39 PM

There's a wee bit of this year's Better Connected that escaped my attention on first reading, but happily an item in Headstar's very worthy E-access Bulletin (external link) this week led me back to the report. It concerns the disparity between WAI conformance claims on councils' websites and their real level of conformance.

Of the 296 sites in the transactional (T) and content plus (C+) categories which claimed a particular level of conformance, only 69 were found to achieve that level in reality, or just 23%. There are only really two explanations for such an alarming disparity - either the councils in question are deliberately over-stating their conformance level, or, more likely in my opinion, they are being led to believe that their sites are achieving a higher conformance level than they really are.

Better Connected suggests that the culprit might be automated testing:

There is no doubt that achieving Level A is hard work and that measuring it is a complex business. Many might also be lulled into thinking that passing the automated tests of Level A (and Level AA and AAA) means that you have achieved conformance at those levels.

If this is true it's fair to say that the use of automated testing is effectively damaging the accessibility of the sites in question, rather than improving it. Given the gravity afforded to the SiteMorse league tables in some quarters, it's easy to understand why councils might be seduced into developing and measuring their sites using the company's tool alone. But as has been said before (external link), the number of WAI guidelines that can be reliably tested with automated software is very small indeed, and the only way to really know if your site is accessible is to have people use it, preferably disabled people using a range of assistive technologies.
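As a rough illustration of that gap, here's a small Python sketch using only the standard library. It can reliably catch an image with no alt attribute at all, but it has no way of judging whether the alt text that is present actually conveys anything useful - that judgement still needs a person, ideally one using assistive technology.

```python
# Sketch of what an automated accessibility check can and cannot see.
# It flags images with no alt attribute at all, but it cannot judge whether
# alt text that is present conveys the image's meaning - that needs a human.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # machine-detectable: no alt attribute at all
        self.present = []   # alt exists, but only a person can judge its quality

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))
            else:
                self.present.append((attrs.get("src", "(no src)"), attrs["alt"]))

sample = '<img src="chart.png"><img src="logo.png" alt="logo.png">'
checker = AltChecker()
checker.feed(sample)
print("Missing alt (a tool can catch this):", checker.missing)
print("Alt present (a tool cannot judge its quality):", checker.present)
```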

An analogy I like to use with non-technical, non-web managers is that of a car's MOT Test (for non-UK readers the MOT Test is a comprehensive safety test that cars have to pass every year). A full accessibility audit is like an MOT Test - it delves into aspects of your site's performance and accessibility that you can't reach yourself, and that you really aren't qualified to judge. An automated test on the other hand is like emailing a photograph of your car, with a note of the make, model and year, to a bloke who knows a bit about cars, and having him judge on that evidence alone if your car is road-worthy and safe to travel in. Which would you rather your passengers travelled in?

PS: I know this is a familiar refrain, and I know that I bang on about it all the time, but I've been convinced of the value of repetition (external link) by Jeff Atwood.

February 27, 2006

Between the Devil and the Deep Blue Sea

@ 8:02 PM

[This piece was written for Public Sector Forums (external link) and is cross-posted here to allow comments from those who don't have access to that site.]

Or SOCITM and SiteMorse vs. the ODPM and Site Confidence...

There's a pithy saying, much loved by researchers and statisticians, that goes something like this:

Be sure to measure what you value, because you will surely come to value what you measure.

Sage advice, which you would be wise to follow whatever business you're in. In reality, a greater danger comes from the likelihood that others will come to value what you measure, or worse still, what others measure about you.

We're all familiar with the arguments for and against automated web testing. It should form an integral part of any web team's quality assurance policy, and can save enormous amounts of time pinpointing problems buried deep in your site. By itself an automated testing tool can be a valuable aid in improving the quality of your website. But when automated tests are used to compare websites the problems start to come thick and fast. The recent disparity between the 'performance' tests from SiteMorse and Site Confidence is a case in point.

Who can you trust? SiteMorse will tell you that their tests are a valid measure of a site's performance. Site Confidence will tell you the same. Yet as previously reported on PSF the results from each vary wildly. SOCITM have offered this explanation for the variation:

"The reality is that both the SiteMorse and Site Confidence products test download speed in different ways and to a different depth. Neither is right or wrong, just different."

And therein lies the real problem. If both are valid tests of site performance then neither is of any value without knowing precisely what is being tested, and how those tests are being conducted. The difficulty is that no-one is in a position to make a judgement about the validity of the tests, because no-one outside of the two companies knows the detail.

It's worryingly easy to pick holes in automated tests. Site Confidence publishes a 'UK 100' benchmark table (external link) on its website, and at the time of writing it has Next On-Line Shopping (external link) sitting proudly at number 1, with an average download time of 3.30 seconds for a page weighing 15.33kb. The problem is that the Next homepage is actually over 56kb. At number 5 is Thomas Cook (external link), with a reported page size of 24.92kb, when it's actually a whopping 172kb. Where does the problem lie in this case? Are the sites serving something different to the Site Confidence tool? Is the tool missing some elements, perhaps those referenced within style sheets, or those from different domains? The real problem is that we can't tell from the information provided, and the same holds true for SiteMorse league tables.
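To illustrate how two tools could each be 'right' and still disagree wildly, here's a simplified Python sketch - the address is a placeholder, and neither company's actual method is public. One figure counts the HTML document alone; the other adds the directly referenced images, scripts and stylesheets, and even that misses anything pulled in from within style sheets or served from other domains.

```python
# Sketch of why two tools can report very different "page weights" for the
# same page: one counts only the HTML document, the other also fetches the
# assets it directly references. Neither figure is "wrong" - they measure
# different things. The URL is a placeholder.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

PAGE = "https://www.example.com/"  # placeholder

class AssetCollector(HTMLParser):
    """Collect directly referenced assets (ignores anything inside CSS)."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(attrs["src"])
        if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read()

html = fetch(PAGE)
collector = AssetCollector()
collector.feed(html.decode("utf-8", errors="replace"))

html_only = len(html)
with_assets = html_only + sum(len(fetch(urljoin(PAGE, a))) for a in collector.assets)

print(f"HTML only:           {html_only / 1024:.2f} kB")
print(f"HTML + linked files:  {with_assets / 1024:.2f} kB")
# Even the second figure misses assets referenced from within stylesheets
# or served from other domains this simple parser doesn't follow.
```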

A few associates and I have been in correspondence with SOCITM for some months now about the use of automated tests for Better Connected. To date the responses from SOCITM have not completely alleviated our concerns. While some issues have been addressed by SiteMorse, many remain unanswered, and perhaps the greater concern is the attitude of SOCITM. For example, when pressed on why SOCITM hadn't sought a third party view of SiteMorse's testing methods, the response was:

You wonder why we have not done an independent audit of the SM tests. To date when detailed points have been raised, SM has found the reason and a satisfactory explanation, almost always some misunderstanding of the standard, or some problem caused by the CMS or by the ISP. In other words, there has been little point in mounting what would be an expensive exercise. You may, of course, not be satisfied with the explanations in the attached document to this set of detailed points.

I'll leave you to draw your own conclusions from that response, other than to say that I wasn't the slightest bit comforted by it.

Our concerns extend beyond Better Connected to the publication of web league tables in general. The fact is that we know very little about how SiteMorse conduct their tests, or what they are actually measuring. In some cases SiteMorse, or any testing company, will have to assert their own interpretation of guidelines and recommendations in order to test against them, and make assumptions about what effect a particular problem might have on a user. For example SiteMorse will report an error against WCAG guideline 1.1 if the alt attribute of an image contains a filename, despite there being legitimate circumstances where such an alt attribute might be required. In truth, there are only two WCAG guidelines which can be wholly tested by automated tools (external link).
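To be clear about what that kind of rule involves, here's a Python sketch of a filename-in-alt heuristic - not SiteMorse's actual test, which isn't public - along with a made-up example of the legitimate case: a downloads listing where the filename really is the clearest label available.

```python
# Sketch of a filename-in-alt heuristic (not SiteMorse's actual rule, which
# isn't public). The second sample shows a legitimate case the rule would
# still flag: a downloads listing where the filename *is* the meaningful text.
import re

FILENAME_PATTERN = re.compile(r"^[\w\-]+\.(gif|jpe?g|png|pdf|doc)$", re.IGNORECASE)

def looks_like_filename(alt_text):
    return bool(FILENAME_PATTERN.match(alt_text.strip()))

samples = [
    ("header_02.gif", "banner left with a raw filename - a genuine failure"),
    ("minutes-2006-02.pdf", "icon in a downloads list where the filename is the clearest label"),
]

for alt, context in samples:
    print(f"alt='{alt}' flagged={looks_like_filename(alt)} ({context})")
# A tool can only see the pattern; deciding whether the alt text is
# appropriate in context still needs a human judgement.
```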

While SOCITM make no use of the accessibility tests from SiteMorse, there are similar concerns about performance tests that are based on no recognised standard, or that measure things with no impact on users. For example SiteMorse raises a warning for title elements with a length of more than 128 characters, citing the 1992 W3C Style Guide for Online Hypertext (external link) as the source of the guidance. This guide is at best a good read for those with an interest in the history of the web, but for SiteMorse to use it as the basis for testing sites over a decade later is highly questionable. To quote from the first paragraph of the guide:

It has not been updated to discuss recent developments in HTML, and is out of date in many places, except for the addition of a few new pages, with given dates.

SiteMorse justifies the use of this test in league tables by saying that many browsers truncate the title in the title bar. But this ignores the fact that the title element is used for more than just title bar presentation (for example for search engine indexing), and that the truncation can depend on the size of the browser window (at 800x600 on my PC, using Firefox, the title is truncated at 101 characters, for example). While it may be useful as a warning to a web developer, who can then review the title and make its wording as clear and concise as possible, it certainly should not be used as an indicator in the compilation of league tables.
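If a length check is worth running at all, it belongs firmly in the advisory pile. Here's a hedged Python sketch of what that might look like - the 128-character figure is simply the one cited from the 1992 guide, and the example titles are made up.

```python
# Sketch of a title-length check treated as advice rather than a hard error.
# Where truncation actually happens depends on the browser, window size and
# font, so a fixed character count can only ever be a prompt to review the
# wording, not a pass/fail measure fit for a league table.
ADVISORY_LIMIT = 128  # the figure cited from the 1992 W3C style guide

def review_title(title):
    if len(title) > ADVISORY_LIMIT:
        return (f"Advisory: title is {len(title)} characters - "
                "check the key words come first.")
    return "OK"

print(review_title("Clackmannanshire Council - Report a missed bin collection"))
print(review_title("Welcome to the council website where you can " + "x" * 120))
```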

From our correspondence with SOCITM it became clear very quickly that they don't know much about how SiteMorse tests either - as evidenced above, the company's explanations have been accepted without question and no independent expert view has been sought.

In most other arenas league tables are based on clear and transparent criteria. Football, exam results, Olympic medals - all rely on known, verifiable facts. Unfortunately the same cannot be said of the current LA site league tables.

Our main assertion is that SOCITM should be working with local authorities and UK e-standards bodies (if there are any left) to produce a specification for the testing of websites using meaningful, independently assessed measures which are based on consensus, rather than blindly accepting the existing, opaque tests offered by SiteMorse, Site Confidence or any other private concern. There needs to be public discussion about precisely what we should be measuring, how those measurements are conducted and what conclusions it would be valid to draw from the results.

In the end it all comes down to a question of credibility - for Better Connected, SOCITM, the testing companies, and most importantly those of us who are responsible for local authority websites. It's likely that league tables are here to stay, but unless we are prepared to question the numbers behind the tables, and the way those numbers are produced, we're probably getting what we deserve.

October 29, 2005

Searching Local Government

@ 9:19 PM

The development of a website is almost always an iterative process. Once the core functionality is in place improvements tend to be incremental, either by extending the range of functions, or by improving existing functions. For example, last week I installed the latest version of mnoGoSearch (external link), the search engine software I use for ClacksWeb. For tasks like this I always try to programme in time to have a look at other UK local authorities to see what they're up to in the same area. It helps me to get ideas for future developments, and often yields ideas I can implement at the same time as the task in hand.

In this instance I reviewed the search functions of the websites of Clackmannanshire Council and 18 other UK local authorities, looking for examples of good practice and novel ideas which might improve user experience. In this first piece I'll present some of the findings, concentrating on two aspects of search that impact upon the user:

  1. The search results data itself - how relevant it is (does it answer the user's query?), how comprehensive (does it include results from files in formats other than HTML?), what visible metadata it includes (does it provide file size, date last modified, file type, etc?).
  2. The presentation of the results - the validity of HTML, the structure and accessibility of the results, what help was provided for users, and so on.

In two future posts I'll cover some of the interesting and novel features I found, and some tips for maintaining and developing a site search function.

The method

For want of a better method I took the top ten sites from the latest (seriously flawed, but that's another matter) UK local government website rankings, plus the sites ranked 50, 100, 150 and so on, up to site 450. A full list of the sites reviewed is provided at the end of this article.

I searched each site using the search function provided on the front page, except in the one case where search was not available on the front page. Since local authorities serve different functions, I needed to use neutral search queries that would apply to them all. The two I used were:

  1. make a complaint - I wanted information on making a general complaint to the Council;
  2. accessibility - I was interested in the Council's web accessibility policy and provision.

I recorded a range of information about the search results, including the product or package used (where known).

Findings

In terms of finding what I was looking for it was an encouraging experience, at least for me as an able-bodied, sighted user. I rated nine of the sites as providing 'good' results, and only three as 'poor'. On all except one of the sites I found the required information for query one. Query two was more problematic, with many superfluous results.

Here's a quick summary of some basic indicators:

Presentation of results

One of the more disappointing aspects was the quality of the mark-up used for the results themselves. In only two cases were results provided with any explicit relationship between the result title and the metadata. In the first case the title was presented as a level 2 heading, with the metadata (the first x characters of the page in question) a paragraph beneath. In the second case the results were presented as a definition list, with the title as definition term and metadata as definition data.

All the other sites used either a table-based layout (none with table header cells or other assistive mark-up) or simply a paragraph per result. I'm sure none of these would have made for a comfortable experience for a screenreader user - the lack of structure between and within results being a real barrier to accessibility.
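For what it's worth, here's a Python sketch of the definition list pattern I mean, using made-up results: each title becomes a linked definition term, with the summary and metadata as definition data, so the relationship between them is explicit in the mark-up.

```python
# Sketch of the definition-list pattern described above: each result's title
# becomes a linked <dt>, with its summary and metadata as <dd> elements.
# The results themselves are made-up placeholders.
from html import escape

results = [
    {"title": "Make a complaint", "url": "/complaints/",
     "summary": "How to complain about a Council service.",
     "meta": "HTML, updated 12 Oct 2005"},
    {"title": "Complaints policy", "url": "/complaints/policy.pdf",
     "summary": "The Council's complaints handling policy.",
     "meta": "PDF, 85 kB, updated 3 Jun 2005"},
]

def render_results(results):
    items = []
    for r in results:
        items.append(f'  <dt><a href="{escape(r["url"])}">{escape(r["title"])}</a></dt>')
        items.append(f'  <dd>{escape(r["summary"])}</dd>')
        items.append(f'  <dd class="meta">{escape(r["meta"])}</dd>')
    return '<dl class="search-results">\n' + "\n".join(items) + "\n</dl>"

print(render_results(results))
```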

Metadata

It's good practice to provide users with as much information as possible about the destination of any hyperlink, and in my opinion it's essential with search results. When users follow links from within a content page of a site, they will be able to take some context from the other information on the page. With search results they don't have that context to inform their judgement, and so must rely on the information the search results provide.

Ideally I'd want to know the type of file I'm heading to (is it HTML, a PDF, a Microsoft Word document, etc), the size of that file, and when it was last updated. Here's what I found:

This lack of metadata surprised me. It's hard to understand why an organisation would consider purchasing or adopting a search package without support for these functions.
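None of this metadata is difficult to collect. As a rough sketch - Python, a placeholder URL, and bearing in mind that not every server returns every header - a HEAD request at index time exposes the file type, size and last-modified date without downloading the whole document.

```python
# Sketch of how the missing metadata could be gathered at index time: a HEAD
# request exposes file type, size and last-modified date for each result.
# The URL is a placeholder, and servers that omit headers need a fallback.
import urllib.request

def result_metadata(url):
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        headers = response.headers
        return {
            "type": headers.get("Content-Type", "unknown"),
            "size": headers.get("Content-Length", "unknown"),
            "last_modified": headers.get("Last-Modified", "unknown"),
        }

print(result_metadata("https://www.example.gov.uk/complaints/policy.pdf"))  # placeholder
```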

Search engines

It was possible to identify the search engine used by fourteen of the sites:

In my unscientific tests the dedicated search packages did seem to produce more relevant results than the CMS searches, but did not necessarily present them in a better fashion. In reality this is far too small a sample to draw any valid conclusions about the value of individual or groups of products.

Conclusions

Search is a critical function on a local government site - the search engine results page (SERP) will without exception feature in the top ten most visited pages. Even ClacksWeb, catering for the smallest Council in Scotland, processes more than 10,000 queries in an average month. Given that, it would be reasonable to expect it to be a lovingly crafted, finely-honed page, with relevance of results, validity of mark-up and accessibility all prime considerations. Clearly this isn't the case, with many of the sites reviewed failing to provide what could be described as a high-quality site search.

Although I found what I was looking for on most sites, I was largely disappointed with the technical quality of the SERPs. I did get some good ideas for enhancing our search function, which will be implemented in the near future, but I also picked up a number of examples of how not to approach search. I'll post more about both at a later date, plus some tips for creating and maintaining a top-notch site search facility.

Appendix - the review sites