December 19, 2006

Perhaps They Should Have Tested More - Glasgow's e-Formulary IT System

Because of a software bug, patients received Viagra instead of Zyban - and nobody complained! Go figure!


--------------------------------------------------------------------------------
Computer glitch prescribes Viagra to stop smoking

By Tom Sanders  19 December 2006 11:19AM  General News

Smokers trying to quit report unusual side-effect.

A software bug in Glasgow's e-Formulary IT system has been blamed for replacing prescriptions for the Zyban anti-smoking medication with the erectile dysfunction medicine Viagra.

Doctors who tried to select the smoking pill instead ended up printing prescriptions for sildenafil, the generic name for Viagra. The National Health Service Greater Glasgow has sent out a warning to family doctors and surgeries in the area.

No patients have complained about receiving the wrong medications, a spokesperson for the health authority told The Times.

The glitch has been traced back to an update of the General Practice Administration System for Scotland. The problem went unnoticed for about six weeks and will take an estimated four weeks to repair.

The health risks of taking Viagra are limited, as the medication has no serious side-effects.

http://www.pcauthority.com.au/news.aspx?CIaNID=43771
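
A brand-to-generic mapping like a formulary lookup is cheap to regression-check after every update. Here's a minimal sketch of that idea in Python; the table, function, and test names are all invented for illustration, since the real GPASS/e-Formulary internals aren't public:

```python
# Hypothetical formulary lookup plus a regression check. The data and
# names are invented; the real e-Formulary schema is not public.

FORMULARY = {
    "Zyban": "bupropion",      # smoking-cessation medication
    "Viagra": "sildenafil",    # erectile dysfunction medication
}

def generic_for(brand: str) -> str:
    """Return the generic name that will be printed on the prescription."""
    return FORMULARY[brand]

def test_formulary_mappings():
    # Run after every system update: a swapped mapping fails immediately.
    assert generic_for("Zyban") == "bupropion"
    assert generic_for("Viagra") == "sildenafil"
```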

December 16, 2006

Perhaps They Should Have Tested More - Sequoia Voting Systems

An interesting combination of "How Well Did e-Voting Work This Time" and "Perhaps They Should Have Tested More", sent to me by my friend Daphne.



Report blames Denver election woes on flawed software
Todd Weiss


December 13, 2006 (Computerworld) Poor software design, serious IT management inefficiencies and an untested deployment of a critical application were all major factors in last month's Election Day problems in Denver, according to a scathing report from an IT consultant. The problems led to hours-long delays for voters looking to cast ballots and raised questions about the overall efficacy of e-voting.

The 32-page report, released Monday, concluded that the main reason for problems was the electronic poll book (ePollBook) software used by the independent Denver Election Commission (DEC) to oversee voting. The e-poll book software -- an $85,000 custom application created by Oakland, Calif.-based Sequoia Voting Systems Inc. -- included the names, addresses and other information for all registered voters in Denver.

Sequoia was already a voting services vendor to the city and county, and the application was designed to allow poll workers across the Denver area to check off voters as they came in to vote at newly created voting centers. Denver has moved from the old precinct-style polling places to a new "voting center" model where voters can go to any polling place in the area to cast ballots, regardless of where they live. The software was supposed to make it easy for officials at any voting center to check online and make sure a voter had not already voted somewhere else in Denver.
 
Instead, it led to massive problems on Election Day due to "decidedly subprofessional architecture and construction," according to the report from consultants Fred Hessler and Matt Smith at Fujitsu Consulting in Greenwood Village, Colo. Fujitsu was hired by Denver shortly after the election to find out what went wrong and help to fix the problems.

"The ePollBook is a poorly designed and fundamentally flawed application that demonstrates little familiarity with basic tenets of Web development," the report stated. "Due to unnecessary and progressive consumption of system resources, the application's performance will gradually degrade in a limited-use environment and will be immediately and noticeably hampered with a high number of concurrent users."

In other words, the more heavily it was used, the slower it worked.

"Moreover, it appears that this application was never stress-tested by the DEC or Sequoia," other than using it in the spring primary as a test election, the report said. "It is at best naive to deploy enterprise software in an untested state. It is remarkably poor practice to deliberately choose a critical production event (the primary election) to serve as a test cycle."

The Sequoia application was chosen over a tested ePollBook application already in use by Larimer County, Colo., that has been offered to other Colorado counties for free. The consultants recommend that the DEC either get the Sequoia application repaired or take a new look at the Larimer software to see whether it could be used effectively in Denver. The Larimer application uses a server-resident Microsoft Access front-end accessed via Citrix and an Oracle database on a dedicated server, as well as five application servers for access by election officials.

The voting center delays -- with waits in some places of up to three hours -- forced an estimated 20,000 voters to abandon their efforts to vote on Election Day, according to the report.

Other problems with the software include Web sessions that would not expire unless a user clicked a specific "exit" button to close the application, tying up system resources, according to the report. The problem, gleaned from user activity logs generated during the Nov. 7 election, was that 90% of user sessions that day were not ended using the special button but were closed by users who simply shut the browser. That did not free up resources, causing the system slowdowns.

"In media reports following the election, Sequoia defended this flaw by stating that the DEC had not requested that a session-timeout feature be implemented," the consultants wrote. "This is a weak and puzzling defense. In any case, session management is a fundamental responsibility that developers of Web applications are expected to fulfill. Describing session management as a special feature that must be requested by the client is not a reasonable position to adopt."

Also troubling, the consultants said, is that the application and database currently share a server instead of relying on a dedicated database server -- something that would have improved performance, security and redundancy.

A spokeswoman for Sequoia, Michelle M. Shafer, declined to comment directly on the consultant's report in an e-mail response. "While we may disagree with opinions expressed by the author of this report, our focus is on helping Denver solve their problems," she wrote.

In addition to the software problems, the report stated, IT management within the DEC needs to change so that similar situations don't occur again.

The three key flaws within the DEC are "generally substandard information technology operations and management," "dysfunctional communications between the technology function and other leadership," and "a general and pervasive insufficiency of oversight, due diligence, and quality assurance," according to the report.

These issues also led to problems with absentee ballots that couldn't be easily scanned by poll workers and other difficulties with equipment, poll workers and other systems, said the report. "The less-than-rigorous conduct of the ePollBook development project and the ultimate failure of [it] on Election Day, along with ... the absentee ballot scanning problem, should be viewed in a broader context of substandard technology management within the DEC," the report said. "Given the increasing criticality of technology in conducting elections and the sensitivity of personal data in the DEC's possession, this casual approach to technology cannot be permitted to continue."

Alton Dillard, a spokesman for the DEC, said the commission "agrees with 99% of the report" and will take actions to resolve the problems. "The ePollBook was the chokepoint, but there are some other things that need to be addressed," he said.

The DEC meets Dec. 19 to decide how to handle next year's spring primary and off-year fall elections. Three options are under consideration, Dillard said, including the use of mailed ballots for all voters, a return to precinct voting or continuing to use voting centers while fixing or replacing the ePollBook software. Officials want to get everything fixed before the 2008 presidential election, he said.

"Right now, there's no uniformity among the [election] commissioners on which form to accept," Dillard said.

Chris Henderson, the chief operating officer for the city of Denver and a spokesman for Mayor John Hickenlooper, said the consultant's report shows that "clearly the ... technology component of the election commission is pretty broken right now. We are dismayed on a lot of levels about the troubled nature of the implementation of the [ePollBook] software. The challenge is the election commission's business to sort out those questions."

Henderson said he hopes the DEC looks seriously at the consultants' other recommendations, including a call for the DEC to take advantage of the IT staff and resources used by the city and county. "I think, clearly, there's an opportunity for them to benefit from some of the smart people we have working for the city of Denver," he said.

On a related note, John Gaydeski, the executive director of the DEC, resigned from his post last week in response to the problems stemming from the November election.

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9006038&source=NLT_AM&nlid=1

December 13, 2006

Zarro Boogs Found

Those of you who use Bugzilla have no doubt encountered the phrase:


Zarro Boogs found.

Here's the "official" explanation of that phrase from the Bugzilla Glossary at http://www.bugzilla.org/docs/2.22/html/glossary.html.

Zarro Boogs Found

This is just a goofy way of saying that there were no bugs found matching your query. When asked to explain this message, Terry had the following to say:

I've been asked to explain this ... way back when, when Netscape released version 4.0 of its browser, we had a release party. Naturally, there had been a big push to try and fix every known bug before the release. Naturally, that hadn't actually happened. (This is not unique to Netscape or to 4.0; the same thing has happened with every software project I've ever seen.) Anyway, at the release party, T-shirts were handed out that said something like "Netscape 4.0: Zarro Boogs". Just like the software, the T-shirt had no known bugs. Uh-huh.

So, when you query for a list of bugs, and it gets no results, you can think of this as a friendly reminder. Of *course* there are bugs matching your query, they just aren't in the bugsystem yet...

--Terry Weissman

December 11, 2006

Perhaps They Should Have Tested More - NASA


Apparently the satellite control software was off by 45 degrees:
"Anybody that has ever taken algebra has gotten a problem wrong because you slipped a minus sign somewhere"
So NASA isn't able to get their algebra right? That can't be a good sign.


December 11, 2006


Software glitch spoils inaugural launch from Va. spaceport

By SONJA BARISIC
Associated Press Writer

ATLANTIC, Va. - The inaugural rocket launch from the mid-Atlantic region's commercial spaceport will be postponed until at least Thursday - and possibly until next month - while scientists try to fix a software glitch that forced Monday's scheduled takeoff to be scrubbed.

Teams still were troubleshooting a problem with the flight software for one of the two satellites to be carried by the Minotaur I rocket, so the earliest the launch could be rescheduled would be Thursday, said Keith Koehler, spokesman for NASA's Wallops Flight Facility, where the spaceport's launch pad is located.

"They're looking at the possibility of trying to make the corrections on the launch pad," Koehler said Monday afternoon. If that attempt fails, the satellite will have to be removed from the rocket to be worked on, and that would push the launch date into January, he said.

The original launch window ran through Dec. 20, with the NASA Wallops range closed during the last week of December for the holidays, Koehler said.

Earlier Monday, officials had said the launch would be postponed until at least Wednesday, and possibly for two to three weeks, because Air Force teams discovered an anomaly with the flight software for the TacSat-2 satellite while doing tests Sunday night.

The problem occurred in software that controls the pointing of the satellite toward the sun so solar panels can charge batteries, said Neal Peck, TacSat-2 program manager. The software would have tilted the panels at a 45-degree angle instead of having them face directly into the sun, he said.

"So we would not be receiving sufficient power to the spacecraft to power all our systems and to conduct all our experiments," he said during a news conference at NASA Wallops two hours before the rocket was to have taken off at 7 a.m.

Asked what caused the problem, Peck said, "It's basically an error in the software."

"Anybody that has ever taken algebra has gotten a problem wrong because you slipped a minus sign somewhere," Peck said. "My guess is it was something along those lines."

The TacSat-2 satellite will test the military's ability to transmit images of enemy targets to battlefield commanders in minutes - a process that now can take hours or days. The Air Force envisions a system that would allow commanders to send questions directly to a satellite overhead and receive answers before the satellite passes back over the horizon.

Also aboard the rocket is NASA's shoebox-size GeneSat-1 satellite, which carries a harmless strain of E. coli bacteria as part of an experiment to study the long-term effects of space on living organisms. The results could be useful for NASA's mission to Mars.

The Mid-Atlantic Regional Spaceport, or MARS, is one of only six federally licensed launch centers in the country. The Air Force will pay the spaceport $621,000 for the launch, spaceport director Billie Reed said Sunday.

Reed did not immediately return a telephone call seeking comment Monday.

The Virginia Commercial Space Flight Authority, a state agency created in 1995, built the launch pad in 1998 on land leased from NASA on Wallops Island on Virginia's Eastern Shore peninsula. Maryland later joined the commercial venture.

Orbital Sciences Corp. of Dulles built the rocket with two stages made from decommissioned Minuteman intercontinental ballistic missiles and two stages from Pegasus rockets.

http://www.jacksonville.com/apnews/stories/121106/D8LUSHC81.shtml

--------------------------------------------------------------------------------
Updated: December 16, 2006

The 69-foot Minotaur I rocket soared from the launch pad at 7 a.m. ET, after teams spent the week resolving a glitch in software for one of the satellites that had scrubbed a liftoff on Monday.

The delay added "a couple hundred thousand dollars" to the $60 million price of the mission, Air Force Col. Scott McCraw, the mission director, said Friday. Included in the total is the cost of the rocket and the two satellites and $621,000 the Air Force will pay the spaceport.

http://www.usatoday.com/tech/science/space/2006-12-16-commercialrocket_x.htm?POE=TECISVA

December 6, 2006

Software Testing is NOT "Breaking Things"

For some odd reason, I really don't like it when software testers say "I enjoy breaking things".



When you test and find a bug, you haven't broken anything - it was already broken!  If anything, the developer who wrote the code broke it.

And now that you have found a breakage, your job has just begun.  You need to dig in much further:
  • Under what conditions does this break occur?  Under what conditions does it not occur?
  • What steps are required to reproduce this break?  And can you express those steps in simple terms so that developers and other testers can see it for themselves?
  • Can you gather related symptoms, logs, images, etc - to help make fixing this break simpler?
  • How long might it take to test a fix for this break?
  • Is this break indicative of a more general problem?  How will we know?
  • Does the presence of this break, or a fix for this break, mean we should re-execute some of our tests?  If so, which ones?
  • What risks does this break expose?
  • When did this break get introduced?
  • Was it possible to find this break sooner?  If so, why didn't we already find it?
  • Should we modify our testing processes to find breaks like this more effectively?
If you enjoy breaking things, perhaps demolition is a good profession for you.

But if you enjoy planning, conducting, and analyzing the results from controlled experiments designed to find existing (or potential) breakages, then software testing might be right for you.

December 3, 2006

One Answer to the Question About the Ratio of Testers to Developers

Often I hear questions like "What is the best ratio of Testers to Developers?" or "What is the industry standard ratio of Testers to Developers?"

As I have mentioned before, those questions really have no answer. The appropriate ratio depends totally on context - the industry, the company, the software, the projects, the budget, the role of the testers, etc, etc.

But, for those who really crave a ratio, and don't care about context, the current issue (December 2006) of Better Software Magazine provides an answer.

Hundreds of their readers answered a survey about their employment situation.

In the results, they present several charts - one of which is the "Ratio of Testers to Developers".

While precise numbers are not given, their chart appears to show the following:
  • about 5% report a 1:1 ratio
  • about 45% report from a 1:2 to a 1:4 ratio
  • just over 30% report from a 1:5 to a 1:10 ratio
  • about 10% report a ratio of 1:10 or more
  • just a few percent report 2:1, 3:1, 4:1 or 5:1 ratios
You should consider signing up for a free subscription at http://www.stickyminds.com/BetterSoftware/magazine.asp.  Good stuff free!

November 26, 2006

Perhaps They Should Have Tested More - Moneris Solutions Corp.

Annually, the holiday season brings us shopping, visits from far-flung relatives, overeating, and reports of software failure.


I laughed when I read the response of the Senior VP of Marketing - "we would like to reassure them that we have identified the problem as a software problem". That's supposed to be reassuring?

Fortunately their system is "now up and running in a highly reliable fashion" - presumably as opposed to the largely unreliable fashion prior to this timely failure.

Debit and credit blackout

Times Colonist; CanWest News Service; Canadian Press

Saturday, November 25, 2006

Many shoppers on the Island and across the country couldn't pay for their purchases yesterday afternoon after a debit and credit card system failed.

A software glitch at payment processor Moneris Solutions Corp. was blamed.

Card holders were left scrambling to find ways to pay for their purchases.

"Some people were definitely inconvenienced," said Alex Mutrie, a clerk at the Petro Canada station on Douglas Street.

"They pumped their gas, tried to use their debit and had no way to pay," said Mutrie. "We took their ID, something they'll come back for. A couple of people were really angry."

The outage began around 1 p.m. Pacific time and lasted until about 3:45 p.m., said Royal Bank spokeswoman Beja Rodeck.

Moneris is jointly owned by the Royal Bank and Bank of Montreal, but the problems appeared to affect whatever credit or debit card was used in a Moneris point-of-sale terminal.

"This is a highly unusual incident. Our system has been running without incident for years," said Brian Green, Moneris senior vice-president of marketing.

"It kind of screws up your whole day," said Brianna Cameron, who was walking around Mayfair Shopping Centre with a friend.

"The stores told us we couldn't use our debit. So we went to get a drink at the food court and their debit wasn't working either," said Thais Robson, a Grade 9 student at Reynolds Secondary School.
--------------------------------------------------------------------------------

Christmas shopping glitch was in the cards

Tough time for Royal Bank

TORONTO -- A software glitch at Moneris Solutions Corp. prevented some merchants across the country from completing credit and debit transactions for about 2 1/2 hours yesterday until the problem was fixed.

Brian Green, senior vice-president of marketing for Moneris, said the system went down about 4 p.m. Eastern time and was fixed by about 6:30 p.m. ET.

The problem was traced to a software application.

"We were able to isolate that software and essentially pull it out and thereby restore service fully," Green said. "This is a highly unusual incident. Our system has been running without incident for years."

Moneris is Canada's largest processor of debit and credit card transactions. It processes more than 2.3 billion payment transactions a year. Green said the problems cropped up across the country.

Moneris is jointly owned by Royal Bank of Canada (TSX:RY) and Bank of Montreal (TSX:BMO) but the problems yesterday affected whatever credit card or bank card was used in a Moneris point-of-sale terminal.

"We deeply regret the inconvenience and frustration that we caused our customers and their customers," Green said.

"However, we would like to reassure them that we have identified the problem as a software problem, certainly not a capacity or volume problem, and our system is now up and running in a highly reliable fashion."

--------------------------------------------------------------------------------
Moneris Restores Service After ‘Glitch’ Cuts off POS Traffic in Canada

(November 27, 2006) While American consumers were flocking to the stores on Friday and whipping out their credit and debit cards for payment, their Canadian counterparts were forced to find cash, their checkbooks, or to come back later because the network of the nation’s largest merchant acquirer went down for two and a half hours.

Moneris Solutions Corp. reports that a software problem in its main processing switch that began about 4 p.m. Eastern time left its merchants unable to process any credit or debit card transactions until about 6:30 p.m. A spokesperson for Toronto-based Moneris, which has 300,000 merchant locations, did not have details Monday morning about the technical nature of the problem. The problem, however, did not arise from heavy volume or insufficient capacity, Moneris reports. Nor does there appear to be evidence of outside tampering. “All indications point to an internal glitch,” the spokesperson says.

In a news release late Friday, Moneris said that when it became aware of the problem it immediately started a diagnostic and restoration process and concurrently set in motion a process to move to its back-up system. The restoration process was successful and the back-up system conversion was not implemented. During the outage, calls flooded into Moneris’s customer-service center, creating a backlog that caused some merchants to receive a busy signal.

Besides Visa and MasterCard credit card sales, the glitch affected Interac PIN-based point-of-sale transactions and American Express Co. transactions in Canada, the spokesperson says. The problem did not affect Moneris’s U.S. affiliate, Moneris Solutions Inc., which is based in suburban Chicago.

The spokesperson says that until Friday’s incident, Moneris’s system had operated virtually flawlessly for years. Network uptime exceeds 99.9%, according to the release. “It was a minor headache, and certainly frustrating for the merchants and customers,” he says.

Moneris is a joint venture of RBC Financial Group and BMO Financial Group, parent companies of the Royal Bank of Canada and Bank of Montreal, respectively. It processes more than 2.3 billion transactions annually.
--------------------------------------------------------------------------------

Interac glitch slows holiday sales
Peter Rusland

By Peter Rusland

News Leader
Nov 29 2006

Local fallout from Friday’s nationwide computer software glitch is still being tallied by hundreds of Cowichan shoppers whose debit and credit cards were refused for use in shops throughout the Valley.

“It was hit and miss; some cards worked and others didn’t. It was a hodgepodge mess and it’s happening again today,” said Bruce’s Grocery manager Loren Halloran.

Staffer Jason Battie noted customers were able to get cash from the store's ATM while regular shoppers charged groceries to their Bruce's accounts.

“They seemed to be taking it fine considering the situation.”

Duncan Safeway’s first assistant manager Darren Bognar said his store’s customers were also understanding during Friday’s 1 to 3:30 p.m. downage that affected businesses using payment processors from Toronto-based Moneris Solutions Corp.

“The customers were really good about the inconvenience,” said Bognar who wasn’t on duty Friday.

“I’m sure it was cash only and some people got really good deals for waiting,” he said. “We tried to take care of our customers.

“We haven’t had any problems with our system for that long a period before outside of individual machines that had nothing to do with Moneris.”

Moneris spokesman Matthew Cramm says the Canada-wide blackout, believed to have been caused by wonky software, is still being probed by the firm, which is owned by the Royal Bank and the Bank of Montreal.

“It caused the network to go down so they took that software off line and restarted the network,” he told the News Leader.

“As far as I know the new network has worked. It was as if your Internet crashed.”

Cramm calls the problem “extremely rare.”

“Moneris’ network has been running incident-free for years. It was software and had nothing to do with (purchasing) volume.

“Their network is designed to handle even more capacity and was built to grow over time. It was just one of those things but it was frustrating for merchants and customers.”

Most bank machines were unaffected.

Canada’s half-dozen other payment processors for bank debit cards, plus Visa, MasterCard and American Express charge cards were unaffected, he notes.

Customers should contact their local bank to discuss problems or call Moneris’ merchant line at 1-866-319-7450.

November 21, 2006

HTTP Response Codes


1xx Codes


Informational


100 - Continue


An interim response telling the browser the initial part of its request has been received and not rejected by the server. A final response code should be sent when the remainder of the material has been sent.


101 - Switching Protocols


The browser may wish to change the protocol it is using. If such a request is sent and approved by the server, this response is given.


2xx Codes


Success


200 - OK


The request was successful and information was returned. This is, by far, the most common code returned on the web.
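
From a test script, the status code is the first thing to check. A quick example using Python's standard library (example.com is a stable public test host):

```python
# Fetch a page and inspect the response code.
import urllib.request

with urllib.request.urlopen("http://example.com/") as resp:
    print(resp.status)   # 200 when the request succeeds
    body = resp.read()   # the returned document
```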


201 - Created


If a POST command is issued by a browser (usually in processing a form) then the 201 code is returned if the resource requested to be created was actually created. If there is a delay in creating the resource the response should be 202, but may be 201 and contain a description of when it will be created.


202 - Accepted


If a request for processing was sent and accepted but not acted upon and the delay in acting is unknown, then this code should be sent instead of 201. Note that 202 does not commit to processing the request; it only says the request was accepted. A pointer to some status monitor for the task is often included with this response so users can check back later.


203 - Non-Authoritative Information


Usually the preliminary information sent from a server to a browser comes directly from the server. If it does not, then this code might also be sent to indicate that information did not come from a known source.


204 - No Content


The request was accepted and filled but no new information is being sent back. The browser receiving this response should not change its screen display (although new, and changed, private header information may be sent).


205 - Reset Content


When you fill in a form and send the data, the server may send this code telling the browser that the data was received and the action carried out so the browser should now clear the form (or reset the display in some manner).


206 - Partial Content


This code indicates the server has filled only part of the request, typically one asking for a specific byte range of the resource.


3xx


Redirection


300 - Multiple Choices


The requested resource corresponds to more than one representation; a list of choices should be returned so the user (or browser) can pick one. You will rarely see this code in practice.


301 - Moved Permanently


As the name implies, the addressed resource has moved and all future requests for that resource should be made to a new URL. Sometimes there is an automatic transfer to the new location.


302 - Moved Temporarily


The addressed resource has moved, but future requests should continue to come to the original URL. Sometimes there is an automatic transfer to the new location.


303 - See Other


The response to your browser's request can be found elsewhere. Automatic redirection may take place to the new location.


304 - Not Modified


In order to save bandwidth your browser may make a conditional request for resources. The conditional request contains an "If-Modified-Since" field and if the resource has not changed since that date the server will simply return the 304 code and the browser will use its cached copy of the resource.
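
Here is what that conditional exchange looks like from a Python client; note that urllib surfaces a 304 as an HTTPError even though it is not really an error:

```python
# Conditional GET: the server replies 304 (no body) if the resource has
# not changed since the given date, and the client reuses its cache.
import urllib.request
from urllib.error import HTTPError

req = urllib.request.Request(
    "http://example.com/",
    headers={"If-Modified-Since": "Sat, 01 Jan 2005 00:00:00 GMT"},
)
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status)              # 200: changed, new copy returned
except HTTPError as err:
    if err.code == 304:
        print("Unchanged - use the cached copy")
    else:
        raise
```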


305 - Use Proxy


This is notice that a specific proxy server must be used to access the resource. The URL of the proxy should be provided.


4xx


Error - Client Side


400 - Bad Request


The server did not understand the request, usually because of malformed syntax. The request should be corrected before it is sent again.


401 - Unauthorized


The request requires some form of authentication (e.g., userid and/or password) but did not contain it. Usually, this code results in a box popping up in your browser asking you for the required information. Once you supply it the request is sent again.
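
Scripts cannot pop up a dialog box, so they register credentials ahead of time instead. The standard-library pattern looks like this (the URL and credentials are placeholders):

```python
# Answer a 401 challenge with HTTP Basic authentication.
import urllib.request

passwords = urllib.request.HTTPPasswordMgrWithDefaultRealm()
passwords.add_password(None, "http://example.com/private/", "alice", "secret")
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(passwords))

with opener.open("http://example.com/private/") as resp:
    print(resp.status)   # 200 once the credentials are accepted
```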


402 - Payment Required


Reserved for future use. [Who says the web is not moving toward being a commercial medium!]


403 - Forbidden


This is a sort of catch-all refusal. If the server understood the request but, for whatever reason, refuses to fill it, a code 403 will often be returned. The server may or may not explain why it is sending a 403 response and there is not much you can do about it.


404 - Not Found


If you happen to mistype a URL or enter an old one that no longer exists this is the error you will likely see. The condition may be temporary or permanent but this information is rarely provided. Sometimes code 403 is sent in place of 404.


405 - Method Not Allowed


Your browser has requested a resource using an HTTP method that is not allowed for that resource. The response should list the allowed methods.


406 - Not Acceptable


Your browser said only certain response types will be accepted and the server says the content requested does not fit those response types. (This is one way content monitoring can be implemented.)


407 - Proxy Authentication Required


This code is similar to 401, except that the browser must first authenticate itself with the proxy.


408 - Request Timeout


The server timed out waiting for the browser to finish sending its request. A new request must be sent.


409 - Conflict


If a site allows users to change resources and two users attempt to change the same resource there is a conflict. In this, and other such situations, the server may return the 409 code and should also return information necessary to help the user (or browser) resolve the conflict.


410 - Gone


Code 410 is more specific than 404 when a resource can't be found. If the server knows, for a fact, that the resource is no longer available and no forwarding address is known, then 410 should be returned. If the server does not have specific information about the resource, then 404 is returned.


411 - Length Required


For some processes a server needs to know exactly how long the content is. If the browser does not supply the required length (the Content-Length header), code 411 may result.


412 - Precondition Failed


A browser can put conditions on a request. If the server evaluates those conditions and comes up with a false answer, the 412 code may be returned.


413 - Request Entity Too Large


If your browser makes a request whose body is larger than the server can process, code 413 may be returned. Additionally, the server may even close the connection to prevent the request from being resubmitted (this does not mean a phone connection will hang up; just that the browser's link to the site may be terminated and have to be started over again).


414 - Request-URI Too Long


You will likely not see this one, as it is rare. But if the resource address your browser sends to the server is too long, this code will result. One reason this code exists is to give the server a response when it is under attack by someone trying to exploit fixed-length buffers by causing them to overflow.


415 - Unsupported Media Type


If your browser submits data in a format (media type) the server does not support, this code may result.


5xx


Error - Server Side


500 - Internal Server Error


An unexpected condition prevented the server from filling the request.


501 - Not Implemented


The server is not designed (or does not have the software) to fill the request.


502 - Bad Gateway


When a server acts as a go-between it may receive an invalid request. This code is returned when that happens.


503 - Service Unavailable


This code is returned when the server cannot respond due to temporary overloading or maintenance. Some users, for example, have limited accounts which can only handle so many requests per day or bytes sent per period of time. When the limits are exceeded, a 503 code may be returned.
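
Because 503 is normally transient, well-behaved clients retry after a pause rather than giving up. A small sketch (placeholder URL):

```python
# Retry on 503 with a simple growing delay.
import time
import urllib.request
from urllib.error import HTTPError

def fetch_with_retry(url: str, attempts: int = 3, delay: float = 2.0) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except HTTPError as err:
            if err.code != 503 or attempt == attempts - 1:
                raise                      # not transient, or out of tries
            time.sleep(delay * (attempt + 1))

print(len(fetch_with_retry("http://example.com/")))
```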


504 - Gateway Timeout


A gateway or proxy server timed out without responding.


505 - HTTP Version Not Supported


The browser has requested a specific transfer protocol version that is not supported by the server. The server should return what protocols are supported.

November 4, 2006

Perhaps They Should Have Tested More - Excelsior Software

Teachers' Input of Grades Crashes System

By Daniel de Vise
Washington Post Staff Writer
Saturday, November 4, 2006; B03


There are probably some Montgomery County students who would prefer that their first-quarter grades never saw the light of day. For a few hours this week, it almost appeared that their prayers would be answered.

A new computerized grading system in 52 middle and high schools seized up Wednesday, overwhelmed as thousands of teachers simultaneously typed in final grades for the marking period. It was the first real test of a new electronic grade book that frees teachers from the tedium of marking grades in ovals with No. 2 pencils and feeding them into Scantron machines.

Officials eventually shut down the system and fixed a glitch that had caused the networking equivalent of a rush-hour pileup on the Beltway.

At a union meeting Wednesday night, frustrated teachers logged what might be the first-ever no-confidence vote in an educational software program.

"They had spent hours in front of their computers, trying to enter their data, and it wouldn't go through," said Tom Israel, executive director of the Montgomery County Education Association, which represents teachers.

The Pinnacle electronic grade book, piloted in four schools last year, is scheduled for countywide use in secondary schools next year. A timesaver for teachers, it also offers parents a chance to monitor their children's progress from week to week on the Edline Internet site.

School system officials said the brief system failure would not delay Thursday's scheduled release of old-fashioned, hard-copy report cards to students.

October 24, 2006

Perhaps They Should Have Tested More - Hart InterCivic

Virginia Ballot Glitch Chops Names

By Associated Press

October 24, 2006, 1:35 PM EDT

ALEXANDRIA, Va. -- U.S. Senate candidate James H. "Jim" Webb has lost his last name on electronic ballots in three Virginia cities where election computers can't cope with long names.

The glitch in Alexandria, Falls Church and Charlottesville also affects other candidates with long names, officials said.

Webb, a Democrat, appears with his full name on the ballot page where voters make their choices. The error -- referring to him only as James H. "Jim" -- shows up on a summary page, where voters are supposed to review their selections.

Election officials emphasized that the problem shouldn't cause votes to be cast incorrectly, though it might cause some confusion.

The mistake stems from the ballots' larger type size, election officials said.

It affects only the three jurisdictions that use balloting machines manufactured by Hart InterCivic of Austin, Texas.

"We're not happy about it," Webb spokeswoman Kristian Denny Todd told The Washington Post, adding that the campaign learned about the problem one week ago. "I don't think it can be remedied by Election Day. Obviously, that's a concern."

Every candidate on Alexandria's summary page has been affected in some way. Even if their full names appear, as is the case with Webb's Republican opponent, incumbent Sen. George F. Allen, their party affiliations have been cut off.

Jean Jensen, secretary of the Virginia State Board of Elections, pledged to have the issue fixed by the 2007 statewide elections.

"If I have to personally get on a plane and bring Hart InterCivic people here myself, it'll be corrected," Jensen said.

Hart InterCivic officials said Monday they intend to correct the problem by next fall. Jensen said Hart InterCivic already has written a software upgrade and recently applied for state certification to apply the fix, but the installation process can be time-consuming because of security measures.

In the meantime, Jensen said, the three affected jurisdictions have started educating voters and will place notices in each polling booth to explain the summary page problem.

Copyright 2006 Newsday Inc.

October 11, 2006

Perhaps They Should Have Tested More - Linden Lab's Second Life

Oops... virtual nudity!
(Imagine the bug report for that one...)



Oct 11, 2006

Sun Microsystems hosts virtual news conference on Second Life


By RACHEL KONRAD
AP Technology Writer



SAN FRANCISCO—
Sun Microsystems Inc. spared the stodgy PowerPoint slides when it announced its new gaming strategy.

Instead, 60 journalists, analysts and product developers from around the world sent their virtual proxies - known as avatars - to a simulated world on the Internet. The event, hosted by the avatar of Sun Chief Researcher John Gage and held on an island in the online game "Second Life," was billed as the first news conference by a Fortune 500 company in the game.

"Second Life" is a subscription-based 3-D fantasy world devoted to capitalism - a 21st century version of Monopoly that generates real money for successful players. More than 885,000 people have avatars who interact with one another in the virtual world.

"We've been trapped inside the text world for so long," Gage said. "It's time for us all to get more Second Lifey."

Santa Clara-based Sun, which develops hardware and software for corporate networks and for gaming servers, hopes its "Second Life" outpost will become a destination for 4 million people worldwide who help write Sun's open-source code. No more than 22,000 can make it to Sun's annual physical gathering in San Francisco.

"We'll have bean bag chairs, and it will be a great place for people to try out code," Gage's avatar said on an outdoor stage flanked by billowing trees and ocean. "We want it to be just like your local neighborhood."

Brands such as Toyota Motor Corp.'s Scion, Intel Corp., CNet Networks Inc., Advance Publications Inc.'s Wired magazine, Adidas AG and American Apparel Inc. have already been building "Second Life" outposts. In August, former Virginia Gov. Mark Warner became the first real-world politician to host a "Second Life" town hall meeting.

"What corporate presence within 'Second Life' allows for is a different type of immersion in the product," said Donald Jones, Georgetown University graduate student writing his thesis on "Second Life." "It provides the corporation with an opportunity to seem like they're cutting edge. It helps them sell their image and their lifestyle within cyberspace."

Sun's virtual news conference Tuesday wasn't entirely glitch-free. The avatar of Philip Rosedale, Linden Lab's founder and CEO, briefly appeared on stage naked because of a software bug.


October 8, 2006

Fall in New England

My wife and I took a trip to Stockbridge, Massachusetts yesterday.  It was a beautiful fall day in New England.  Trees changing color, blue skies, sunny.

We walked around Stockbridge center for a while and had lunch. 

Saw Alice's Restaurant ("You can get anything you want... at Alice's Restaurant.") and the Red Lion Inn.

Went to the Norman Rockwell museum.  Around back they have his workshop.  He had a nice view.


(The view from behind Norman Rockwell's studio.  October 7th, 2006.)


Fall in New England...
with the one you love... 
life doesn't get any better than that!

October 7, 2006

Best and Worst Technical Interview Questions

Recently, Esther Schindler visited SQAForums.com. She's writing an article for DevSource.com and, as she occasionally does, came by to ask a question and gather ideas.

This time she asked us "What are the best and worst technical interview questions you have heard?" Here are my answers.

Worst Interview Question:
Any brain teasers.

This interviewing fad started a while ago, and got popularized by Microsoft, I believe. Now everyone thinks it's clever to ask "why are manhole covers round" or "how would you test this pencil" or other assorted puzzles.

I always do my best to answer truthfully and without sarcasm. And, while I may not always enjoy them, I'm reasonably good at brain teasers.

But, I usually follow up with a question of my own like "Have you found that people who are good at answering these brain teasers actually turn out to be better employees than those who aren't good at it?"

I have yet to find a potential employer who could honestly answer "Yes" to that question. Usually, they just mumble something about "we just wanted to get a sense of your thought process" and move on.

Sometimes I have to conclude that they just aren't very good at interviewing. I put that on the "potential problem" side of the mental checklist I always keep about prospective employers.


Best Interview Question:

"As a QA Manager - what keeps you awake at night?"

I found it to be a really good question, and it led to some really deep discussion about what was important to this company.

I learned a lot about them, they learned a lot about me, we found out that we thought alike.

And yes, I did get hired.

And here's Esther's complete article:
http://www.devsource.com/c/a/Techniques/The-Best-and-Worst-Tech-Interview-Questions/

(It was pretty good, but she consistently spelled "Massachusetts" incorrectly. Don't they have editors for that sort of thing?)

September 30, 2006

Perhaps They Should Have Tested More - Pearson School Systems

Wrong grades sent out

By Lindsay Melvin
September 30, 2006

A Shelby County Schools computer glitch that miscalculated hundreds of interim report cards has staff working overtime and -- for a time -- had college-bound students biting their nails.

PowerSchool, the $800,000 Web-based student information system designed by Apple, was installed systemwide in August. It grabbed attention after nearly 400 inaccurate interim reports were sent home.

At Germantown Middle School, students flunked homeroom. Typically, they're not graded in homeroom, so you can imagine their surprise, said principal Russell Joy.

Across the county, teachers saving data from classroom laptops are facing slow systems that often save grades and attendance incorrectly.

Fortunately, "nothing has been lost," said Supt. Bobby Webb, who guaranteed there are backup copies of everything.

Operated by California-based Pearson School Systems, PowerSchool is intended to help track attendance, grades and about 40 other student categories, along with allowing parents to view their child's progress online.

Pearson president Mary McCaffrey will appear before the school board at 3 p.m. Tuesday.

Dealing with the brunt of the problem are the educators, said board member Ron Lollar.

"My concern is the frustration of the teachers," Lollar said.

PowerSchool is supposed to help gather information required by the state under federal No Child Left Behind guidelines.

Shelby County's technology department, as well as engineers from Pearson School Systems, are trying to smooth out problems. Specialists suspect a bug in the program, and a new version has been installed. According to officials, there has been some improvement, but PowerSchool is still not up to speed.

If the software is not fixed by report card time next month, teachers will hand-correct any errors and those incorrect grades will not go on record, Webb said.

In some schools, attendance records are days behind. Data entry clerks and teachers have been wrestling with the slow system, sometimes taking hours to enter just five students.

As teachers use their laptops to enter attendance for 46,000 students each morning, the system jams.

Educators are left recording information on the school's old system or in attendance books.

According to the superintendent, the district's attendance-based state funding will not be affected by the delay.

The new software had high school seniors frustrated as they tried to get transcripts.

At the beginning of each school year transcripts are updated and printed, but nearly seven weeks into the school year students were not able to obtain them.

Some students seeking early college admission were given rough versions of their transcripts. But several school counselors said the documents were an embarrassment. In some cases universities would not accept the rough versions.

The foul-up left students like Jessie Andrews of Bartlett on edge: "I'm a senior and I'm looking at colleges and we need that stuff," said Andrews.

Without the transcript she hadn't been able to apply for scholarships, she said.

Christopher McGhee, also a Bartlett senior, was worried as well as he tried to get an ROTC scholarship and early admission to a Navy college program.

As of Thursday, school officials reported transcripts were updated and being printed off the district's old program.

Before PowerSchool was put into effect, it got a three-year test run at the elementary, middle and high school levels. It worked great and was the top choice when compared with Chancery Software, which is now owned by Pearson, though it was not at the time.

Webb says he plans to make some temporary hires to handle the heavy volume. Webb said he expects Pearson to pay any additional costs.

September 22, 2006

Perhaps They Should Have Tested More - Vancouver Airport

Software glitch triggered Vancouver airport scare

Updated Thu. Sep. 21 2006 9:34 PM ET

CTV.ca News Staff

CTV News Vancouver has learned that a massive shutdown at the airport on Sunday morning was due to a security software glitch, and not an apparent security breach.

At the time, the Canadian Air Transport Security Authority (CATSA) said there had been an alleged security breach -- involving an image in a piece of carry-on baggage.

After numerous calls, a CATSA representative told CTV News that an image of an explosive device, which turned up on a pre-boarding security X-ray, was actually a projection of a training image used to keep guards on their toes.

The training image had been erroneously activated in the software program, and CATSA still isn't sure how this happened.

Vancouver airport security guards saw a ghost image of an explosive device on their screens, but didn't realize it was part of the training system.

When they couldn't find a real bag associated with the image, they believed a passenger with a dangerous device might be headed for a plane, and ordered a complete shutdown of the airport.

"It was thought to be an explosive device," Renee Fairweather. "That's why the action had to be taken."

Airport traffic was halted for about two hours, and the cost of Sunday's delays is estimated to be in the $1 million range.

CATSA still isn't sure how the ghost image became activated on the Vancouver equipment. The national agency has been using this software program for three years.

"As a result of what happened Sunday, every piece of equipment that has that feature on it has had that feature deactivated... across the country," Fairweather said.

With a report by CTV Vancouver's Kate Corcoran

September 17, 2006

Perhaps They Should Have Tested More - Segway

Segway recalls all its scooters  Software problem caused rider injuries 

By MICHAEL P. REGAN
The Associated Press

September 15, 2006 8:00AM

The injuries that caused Segway Inc. to recall its scooters yesterday were not numerous, but they sure sound painful: Broken teeth, a broken wrist and some banged-up faces, including one that needed surgery to repair.

There were only six reported incidents in total, including one involving a child, but the company believes they all stem from a glitch in the self-balancing, two-wheeled vehicle's software that - in rare instances - causes its wheels to reverse direction in a sudden, unexpected motion that can jerk riders off their feet.

Despite the bruised faces, Segway Inc. Chief Executive Jim Norrod does not believe the company's reputation will be left with a black eye.

"We don't see that it will have a negative impact on business at all,"Norrod said. Segway's network of distributors seems pleased with the way it has handled the recall, Norrod said, pointing out that the company figured out the problem on its own, without the prodding of regulators.

"Any injury is too much to us,"said Norrod. "This company has built its reputation upon its commitment to safety. From day one, that was and has been our goal."

The recall involves all 23,500 of the Segway Personal Transporters that the company has shipped to date. The U.S. Consumer Product Safety Commission, with which Segway is cooperating on the voluntary recall, said consumers should stop using the vehicles immediately. The scooters were previously known as the Human Transporter.

Segway is offering its customers, which include more than 150 police departments around the world, a free software upgrade that will fix the problem. The upgrades will be done at Segway's more than 100 dealerships and service centers across the world, according to company spokeswoman Carla Vallone, and the Bedford-based company will pay to ship the devices to the appropriate center if need be.

It is the second time the scooters, which sell for about $4,000 to $5,500, have been recalled since they first went on sale in 2002. The 2003 recall was for the first 6,000 of the devices sold, and involved a problem that could cause riders to fall off the device when its battery ran out of juice.

Segway Chief Technology Officer Doug Field, who has been involved with the development of the device with inventor Dean Kamen since its earliest days, said the problem that sparked the latest recall was found while the company was testing its new model. He said a very unusual and specific set of conditions can cause the problem.

The scooter's speed is determined by how far forward users lean, and if the riders lean too far forward, a "speed limiter" pushes them back to keep the device at its maximum speed of 12.5 mph. The problem happens after the speed limiter tilts back, then the rider steps off the device and gets back on it quickly.

Field said the actions that cause the problem are of "very low probability, but possible, which then made us go pull every reported accident in the company's history." After the company found the six incidents believed to be related to the problem, it notified the CPSC and got the ball rolling on the recall, Field said.
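
Once a "low probability, but possible" sequence is known, it belongs in the regression suite. Here's a toy state-machine sketch of that idea, purely illustrative since the real controller is proprietary:

```python
# Toy model of the reported sequence: speed limiter engages, rider steps
# off, rider remounts quickly. Correct firmware must re-initialize the
# limiter state on remount so the wheels keep driving forward.

class ToyScooter:
    def __init__(self):
        self.limiter_engaged = False
        self.wheel_direction = +1  # +1 forward, -1 reverse

    def hit_speed_limit(self):
        self.limiter_engaged = True   # limiter tilts the rider back

    def step_off(self):
        pass                          # rider dismounts

    def remount(self):
        # The recall-worthy behavior was roughly: stale limiter state
        # reversed the wheels here. Correct behavior: reset the state.
        self.limiter_engaged = False
        self.wheel_direction = +1

def test_quick_remount_keeps_wheels_forward():
    s = ToyScooter()
    s.hit_speed_limit()
    s.step_off()
    s.remount()
    assert s.wheel_direction == +1 and not s.limiter_engaged
```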

Company officials would not comment on whether the problem has sparked lawsuits and would not give any details about the severity of the injuries sustained.

According to CPSC spokesman Scott Wolfson, the injuries included broken teeth, a broken wrist and facial injuries - including one that needed surgical repair.

Segway's dealers have received the software updates and owners can schedule an appointment through the company's website to have the update installed.

The company last month launched a new generation of the Segway that users can steer simply by leaning in the direction they want to go, rather than using a small wheel on the handlebar. All new shipments will have the corrected software.

Norrod, who was brought in as CEO last year by the company's principal investors, Credit Suisse Group and the venture capital firm Kleiner Perkins Caufield & Byers, has made grooming Segway for an initial public offering or sale of the company a top priority.

The privately held company has been secretive about its financial health, but the total number of vehicles recalled yesterday implies it has tallied up sales in the neighborhood of $100 million or more since the Segway's launch, about as much as the device reportedly cost to develop, not including operational costs since it hit the market. The company also sells modified versions of the Segway for use in robotics projects, and that has likely contributed a small amount to revenue. Those products are not subject to the recall.

The most famous tumble from a Segway came in 2003, when President Bush tried one out at his family's estate in Maine. The device went down on his first attempt to ride it, but Bush stayed on his feet with an awkward hop over the scooter. However, that incident had a different cause: Bush had not turned on the Segway.

06/13/08 - a followup.

Now, a lawsuit related to the defective Segways:

http://www.concordmonitor.com/apps/pbcs.dll/article?AID=/20080613/NEWS01/806130311

September 11, 2006

Perhaps They Should Have Tested More - Nuon Energy

Power surge fries appliances in Friesland

11 September 2006

AMSTERDAM — A 300-volt power surge in the electricity grid has caused considerable damage in the north of Friesland Province.

A software glitch resulted in 11,000 homes in the town of Het Bildt receiving a much higher voltage than normal for 20 minutes on Friday.

By Monday there were dozens of reports from residents and businesses of broken televisions, microwaves, central heating boilers, modems and other electric devices.

Residents have inundated the local branch of electronics shop Expert with requests to fix or replace appliances. "We've had to say 'no' a lot," Expert's Janny de Vos said.

A spokesperson for energy company Nuon said on Monday that its subsidiary, network manager Continuon, will pay compensation.
Nuon will deal with damage claims in an accommodating fashion, he said, but it may take weeks before the full extent of the damage is known.

August 26, 2006

Perhaps They Should Have Tested More - Affiliated Computer Services Inc.

Federal student loan program exposes data on 21,000 users

Linda Rosencrance

August 25, 2006 (Computerworld) -- The U.S. Department of Education has disabled its Direct Loan Servicing System, the online payment feature of its Federal Student Aid site, because of a software glitch that exposed the personal data of 21,000 students who borrowed money from the department, said Education Department spokeswoman Jane Glickman in an e-mail to Computerworld.
When a borrower was online late last Sunday or early on Monday, his personal information -- including name, Social Security number, birth date and address -- could have been exposed to another user who was also signed on at the same time and doing the exact same step, Glickman said.

The cause of the problem was a routine software upgrade by the vendor, Dallas-based Affiliated Computer Services Inc. (ACS), she said. The program had a coding error, and six Web pages were affected, she said.

ACS could not be reached for comment today.

"The identified Web pages have been disabled and are not going back online until we are 100% satisfied that this problem will not happen again," Glickman said. "The U.S. Department of Education takes the safeguarding of our users' personal information very, very seriously, and any compromise of users data is one incident too many."

Glickman said the number of borrowers possibly affected was less than one-half of 1% of its 6.4 million users. She said there have been no reports of identity theft. Borrowers who were online between Sunday night and Tuesday morning have all been identified and will be notified, she said.

Glickman said Federal Student Aid has a team of technical experts on site at ACS to ensure that the problem is fixed. ACS has agreed to offer credit services to the borrowers affected as long as necessary but for a minimum of one year, she said.

Glickman said the software upgrade went live on Sunday, Aug. 20, at 9:16 p.m., EST. The first three Web pages believed to be affected by the software error were disabled on Monday at 1:09 p.m.

"Ongoing analysis identified the exact software error, and three additional Web pages were identified to be impacted by the software error," she said. "These three additional Web pages were disabled on Tuesday ... at 10:16 a.m."



From http://www.dlssonine.com/
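
The article doesn't say what the coding error actually was. But the failure signature -- two borrowers signed on at the same time, doing the exact same step, each seeing the other's data -- is the classic symptom of per-request state kept in a shared variable instead of per-request scope. A minimal sketch of that bug pattern in Python (all names hypothetical, and only a guess at the mechanism):

    import threading, time

    current_borrower = None  # shared by every request -- this is the bug

    def render_account_page(borrower):
        global current_borrower
        current_borrower = borrower   # stash "my" user in shared state
        time.sleep(0.01)              # page assembly; other requests run meanwhile
        return "account page for " + current_borrower

    results = {}
    def simulate(user):
        results[user] = render_account_page(user)

    t1 = threading.Thread(target=simulate, args=("alice",))
    t2 = threading.Thread(target=simulate, args=("bob",))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(results)  # depending on timing, alice's page can show bob's data

Keeping the borrower on the stack or in per-session storage makes this kind of leak impossible, no matter how many users hit the same step at once.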

August 8, 2006

Perhaps They Should Have Tested More - KDDI Corp.

KDDI phones hit by e-mail glitch


Two mobile phone models sold by KDDI Corp. automatically switch off after sending or receiving certain e-mail characters, the major phone carrier said Monday.

The malfunction is blamed on a software flaw and KDDI sales outlets are now fixing the handsets when turned in by customers.

The phones are the W42CA model made by Casio Computer Co. and the W42H model made by Hitachi Ltd., which were sold between late June and July. Combined sales stood at some 96,300 units as of Saturday.

The mobile phones shut down after receiving or sending the "%," "n" and "S" characters.

The basic software was developed jointly by Casio and Hitachi.
The Japan Times: Tuesday, Aug. 8, 2006
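
The article doesn't identify the flaw, but that character list is suggestive: "%", "n", and "S" are all printf conversion characters, and crashing on exactly those inputs is the signature of user text being passed directly as a format string. A Python analogue of the mistake (purely my guess at the mechanism):

    def log_received(msg):
        # bug: the message itself becomes the format string
        print(("received: " + msg) % ())

    log_received("see you at 5")  # fine -- no conversion characters
    log_received("100% sure")     # raises -- the "%" starts a conversion specifier

A tester who routinely puts "%", quotes, and other metacharacters into message content would have caught this before 96,300 handsets shipped.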

July 30, 2006

Perhaps They Should Have Tested More - MBTA

MBTA glitch has riders feeling robbed

The MBTA's Charlie is ripping you off, sort of.
Mike of Winthrop explains a glitch he found in the T's new automated fare collection system:

``Since I took a ride on the Silver Line (90 cents) I had an extra 35 cents on my CharlieTicket. No problem! I simply went to the machine and was relieved to see that there is an `add value' function which would let me add 90 cents so that I could take the T for $1.25," he wrote.

You know what's coming next.

``Every time I tried to add 90 cents, the machine rejected my card with the notice `Card Value does not have minimum of $1.25,' which means that `add value' does not work on any left over change below $1.25.

``Please have the MBTA write me a check for the money owed -- since it is lost to me -- 35 cents down the drain. I never would have used my card on the Silver Line!"

We thought this sounded fishy so we did our own little expensive experiment. Down to JFK/UMass we walked, up to a CharlieTicket vending machine, where we used the ``other amount" button to purchase a ticket for $1.60 to cover bus and subway fare.

We walked through the turnstile, turned around, walked back out the turnstile, stuck our ticket back into the vending machine and checked the amount on it, which was 35 cents.
And lo and behold, when we tried to add 90 cents in value to boost it back to $1.25, Mike was right. The machine would accept nothing lower than $1.25.

And there it sits in our wallet, 35 unusable cents.

(By the way, since there were no change machines around, we're also walking around with a hernia-inducing heap of dollar coins in our pocket.)

We talked to MBTA General Manager Daniel Grabauskas late Friday, who pledged that if the problem is a software glitch, it will be fixed.

``If it's an additional software programming change, and that's what our customers want, then that's what they'll get."
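
Judging from the error text alone, the "add value" check appears to validate the ticket's current balance against the $1.25 minimum, rather than the balance the ticket would hold after the purchase. A guess at the logic, with hypothetical names:

    MIN_FARE = 1.25

    def add_value_buggy(balance, amount):
        if balance < MIN_FARE:            # tests the wrong quantity
            raise ValueError("Card Value does not have minimum of $1.25")
        return balance + amount

    def add_value_fixed(balance, amount):
        if balance + amount < MIN_FARE:   # test the resulting balance instead
            raise ValueError("resulting value would be below the minimum fare")
        return balance + amount

    add_value_fixed(0.35, 0.90)   # -> 1.25, Mike rides the T
    add_value_buggy(0.35, 0.90)   # -> ValueError, 35 cents down the drain

A boundary-value test at exactly the minimum fare -- starting from a balance below it -- would have flushed this one out.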

July 29, 2006

Patriots Training Camp 2006

Went to Patriots Training Camp in Foxboro today - lots of fun.


During the evening session we had a brief rainstorm. Afterwards this:



I consider this a good omen!

July 22, 2006

Thinking Like a Professional Tester

A recent incident at work got me thinking about characteristics that distinguish a casual tester from a professional.

Until recently, this company had no professional QAers. Any testing had been done by Product Management folks, and by customers during User Acceptance Testing. These people did the best they could, but they clearly had a lot on their plate - finding bugs wasn't their highest priority.

This time, one person was asked to go and delete a bunch of test accounts that had been added to a system. She deleted most of them but reported that some couldn't be deleted. She wasn't sure why, but they just "wouldn't delete". She didn't seem concerned.

I asked her to put together a list of the accounts which could be deleted and those which could not. When I saw the list, it appeared that all of those which couldn't be deleted had apostrophes in the name. I gave her a few suggestions and asked her to investigate.

It turned out that accounts containing special characters in the name could not be deleted. This bug had been there for quite a while. People testing the system in the past had noticed the problem, but hadn't understood why it happened. Even the customer had noticed the problem, but had just asked the developers to go into the database and delete the accounts.
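
I never saw the code, but the usual cause of "names with special characters can't be deleted" is SQL assembled by string concatenation, where the apostrophe terminates the string literal early. A sketch of the pattern, using sqlite3 purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT)")
    conn.execute("INSERT INTO accounts VALUES ('O''Brien Test')")

    name = "O'Brien Test"

    # buggy: the apostrophe in the name breaks the statement
    try:
        conn.execute("DELETE FROM accounts WHERE name = '" + name + "'")
    except sqlite3.OperationalError as e:
        print("delete failed:", e)  # syntax error near "Brien"

    # fixed: a parameterized query handles any characters in the data
    conn.execute("DELETE FROM accounts WHERE name = ?", (name,))

Parameterized queries also happen to be the standard defense against SQL injection, which is this same bug wearing a nastier hat.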

To me, this points out a few differences between casual testers and professionals:
  • The ability to recognize that a bug is occurring
  • The desire to dig in and find out the characteristics of the bug
  • The skill to generalize the problem
  • The willingness to initiate a bug report

July 8, 2006

Perhaps They Should Have Tested More - Churchill Race Track

Saturday, July 8, 2006

Computers caused Pick 3 glitch

Surface change created confusion

By Jennie Rees
jrees@courier-journal.com
The Courier-Journal



Tuesday's Pick 3 snafu involving the seventh race was caused by a United Tote software problem apparently triggered when a new rule came into play for the first time, Churchill general manager Jim Gates said yesterday.



Churchill received permission last fall to change Pick 3 rules, specifically to address what happens when a race comes off the turf after wagering on a multi-race bet has closed.



But the situation had never arisen until Tuesday, when the jockeys, after the fifth race, requested that the seventh and ninth races come off the grass following a mid-afternoon downpour. By then, betting had closed on the Pick 3 involving races five through seven.



Under the new rule, every horse in the leg involving the surface change is considered a winner. Complicating matters were a scratch after the sixth race of a horse running in the seventh race and a scratch of a horse in the post parade for the eighth race. Those scratches resulted in consolation payoffs in the Pick 3s ending with the eighth and ninth races.



Gates said United Tote had tested the system to accommodate the rules changes and hadn't had any problems.



"…When they inputted the surface change into the system, it wanted to refund all of the wagers, which was not supposed to happen," he said. "In order to get that fixed, they had to shut down that system to make that change. That was the reason for prices not being posted until that evening after the races were over."



The ninth race was not a problem because the surface change was announced before betting closed on the Pick 3 beginning with the seventh race, he said.



Many fans were angered by having to wait until yesterday to cash, because Churchill was closed Wednesday and Thursday. Gates said those who bet at tracks or simulcast outlets that were open Wednesday could cash that day.

July 6, 2006

Generic Software QA Engineer Job Descriptions and Levels

Many companies use descriptions like these when determining pay levels for QAers.

Software QA Engineer Job Descriptions

Primary Responsibilities:
  • Debugs software products through the use of systematic tests to develop, apply, and maintain quality standards for company products. 
  • Develops, maintains, and executes software test plans.
  • Analyzes and writes test standards and procedures.
  • Maintains documentation of test results to assist in debugging and modification of software.
  • Analyzes test results to ensure existing functionality and recommends corrective action.
  • Consults with development engineers in resolution of problems.
  • Provides feedback in preparation of technical appraisals of programming languages, systems, and computation software.
  • Ensures quality computer integration into the overall functions of scientific computation, data acquisition, and processing.

QA Engineer Level 1
KNOWLEDGE:  Learns to use professional concepts.  Applies company policies and procedures to resolve routine issues.
JOB COMPLEXITY:  Works on problems of limited scope.  Follows standard practices and procedures in analyzing situations or data from which answers can be readily obtained.  Contact with others is primarily internal.
SUPERVISION:  Normally receives detailed instructions on all work.
EXPERIENCE:  Typically requires no previous professional experience.

QA Engineer Level 2
KNOWLEDGE:  Uses professional concepts; applies company policies and procedures to resolve a variety of issues.
JOB COMPLEXITY:  Works on problems of moderate scope where analysis of situations or data requires a review of a variety of factors.  Exercises judgment within defined procedures and practices to determine appropriate action.  Has internal and some external contacts.
SUPERVISION:  Normally receives general instructions on routine work, detailed instructions on new projects or assignments.
EXPERIENCE:  Typically requires a minimum of 2 years of related experience.  In some companies, the requirement will be less.

QA Engineer Level 3
KNOWLEDGE:  Uses skills as a seasoned, experienced professional with a full understanding of industry practices and company policies and procedures; resolves a wide range of issues in imaginative as well as practical ways.  This job is the fully qualified, career-oriented, journey-level position.
JOB COMPLEXITY:  Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors.  Demonstrates good judgment in selecting methods and techniques for obtaining solutions.  Interacts with senior internal and external personnel.
SUPERVISION:  Normally receives little instruction on day-to-day work, general instructions on new assignments.
EXPERIENCE:  Typically requires a minimum of 5 years of related experience.  In some companies, the requirement will be less.

QA Engineer Level 4
KNOWLEDGE:  Having wide-ranging experience, uses professional concepts and company objectives to resolve complex issues in creative and effective ways.  Some barriers to entry exist at this level (i.e., dept/peer review).
JOB COMPLEXITY:  Works on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors.  Exercises judgment in selecting methods, techniques, and evaluation criteria for obtaining results.  Internal and external contacts often pertain to company plans and objectives.
SUPERVISION:  Determines methods and procedures on new assignments, and may provide guidance to other personnel.
EXPERIENCE:  Typically requires a minimum of 8 years of related experience.  In some companies, the requirement will be less.  At this level, graduate coursework may be desirable.

QA Engineer Level 5
KNOWLEDGE:  Having broad knowledge or unique knowledge, uses skills to contribute to development of company objectives and principles and to achieve goals in creative and effective ways.  Barriers to entry such as technical committee review exist at this point.
JOB COMPLEXITY:  Works on significant and unique issues where analysis of situations or data requires an evaluation of intangibles.  Exercises independent judgment in methods, techniques and evaluation criteria for obtaining results.  Contacts pertain to significant matters often involving coordination among groups.
SUPERVISION:  Acts independently to determine methods and procedures on new or special assignments.  May supervise the activities of others.
EXPERIENCE:  Typically requires a minimum of 12+ years of related experience.  In some companies, the requirement will be less.  At this level, graduate coursework may be expected.

QA Engineer Level 6

KNOWLEDGE:  As an expert in the field, uses professional concepts in developing resolution to critical issues and broad design matters.  Significant barriers to entry (i.e., top management review/approval) exist at this level.
JOB COMPLEXITY:  Works on issues that impact design/selling success or address future concepts, products or technologies.  Often serves as a consultant to management and external spokesperson for the organization.
SUPERVISION:  Exercises wide latitude in determining objectives and approaches to critical assignments.
EXPERIENCE:  Typically requires a minimum of 15+ years of related experience.  In some companies, the requirement will be less.  At this level, a graduate degree may be expected.

July 4, 2006

You Shouldn't Jump to Conclusions

Here is a sanitized version of an interesting email conversation.

Everyone involved worked for the same company, but for different product lines (Product A and Product B).

Product B was used to track bugs, and included a new snap facility for attaching screenshots to Issue Reports. QAers involved with Product A used Product B internally.

A member of the Product A team concluded that Product B was buggy and shouldn't be used. He even recommended using a competitor's product! As you will read, he was a bit hasty in his conclusions.

--------------------------------------------------------------------------------
From: [Product A Architect]
Sent: Tuesday, June 27, 2006 1:17 PM
To: [QA]
Cc: [Product B Developer]

Subject: Images in [Product B] are very large

Most of the new issues I am seeing require long waits to show images that should be quite small but are being delivered as 1000x1000 (approx) BMP files.

I suspect this is the new screen snap facility in [Product B].

PLEASE don’t use this facility as it is buggy, seems to produce HUGE files and probably takes up large amounts of db space. Perhaps I am wrong and this is simply a rendering technique in [Product B], but it does not look like it. If this is not due to [Product B] and you are just pasting huge image files PLEASE stop.

I would suggest you start using [Competitor’s Product] for screen shots. It is fast, stable, cheap, allows time-taken banners, and is very impressive in capability.

[Product A Architect]
Development Manager / Software Architect

--------------------------------------------------------------------------------
From: [Product B Development Manager]
Sent: Tuesday, June 27, 2006 11:52 PM
To: [Product A Architect]
Cc: [QA]; [Product A Development Manager]; [Product B Developer]

Subject: FW: Images in [Product B] are very large

Hi [Product A Architect],

I understand your concern about image sizing and we are looking at this now. Thank you for raising this issue.

Could you please clarify what in particular is of the most concern to you when you wrote, “PLEASE don’t use this facility as it is buggy?”

Thanks,

[Product B Development Manager]

--------------------------------------------------------------------------------
From: [Product A Architect]
Sent: Wednesday, June 28, 2006 1:15 PM
To: [Product B Development Manager]
Cc: [QA]; [Product A Development Manager]; [Product B Developer]

Subject: RE: Images in [Product B] are very large

:o)

Wouldn’t you call it a bug when snapping small sections of the screen creates full-screen images in BMP format (the most inefficient format available), resulting in download waits long enough to prevent common use?

Until that very significant problem is addressed, I would FAR prefer that our QA staff (for [Product A] anyway) not use the facility at all rather than produce unusable and space consuming images. A good screen capture facility is very inexpensive.

I have no specific complaints beyond that.

--------------------------------------------------------------------------------
From: Strazzere, Joe
Sent: Wednesday, June 28, 2006 6:36 AM
To: [Product A Architect]; [Product B Development Manager]
Cc: [QA]; [Product A Development Manager]; [Product B Developer]; [Product Manager]

Subject: RE: Images in [Product B] are very large

The snap screen facility can indeed produce BMP files, but produces JPG files by default.

I haven’t seen that they are any larger than JPGs produced by other tools. Nor have I seen where it “takes up large amounts of db space”.

If there are bugs, someone should write Issue Reports, rather than just telling people not to use it because “it is buggy”.

The new snap screen feature is part of [Product B], and is now a shipping product.

-joe

--------------------------------------------------------------------------------
From: [Product A Architect]
Sent: Wednesday, June 28, 2006 1:27 PM
To: Strazzere, Joe; [Product B Development Manager]
Cc: [QA]; [Product A Development Manager]; [Product B Developer]; [Product Manager]

Subject: RE: Images in [Product B] are very large

Perhaps this is just [An individual QAer’s] use of the tool? There is quite certainly a problem somewhere even if you are unaware of its existence. If I use a tool and find specific problems, I will forward the specifics to the folks who seem to be using it.

[Product B] is not a tool I work on and I am not involved in the QA or development of it. Like ANY other tool in use for development, if there is a problem in deployment, I will halt use of particular features of the product if it is causing problems. I quite frankly don’t care in the least who supplied the tool: if it does not work, I will ask those using it to stop the process that does not work. We will NOT continue to use it pending an issue resolution just because it is an internal product. This is about effective management of a development process for [Product A] and not about the development process of the tool in use.

I do understand the concerns and needs of the [Product B] team, but since this thread exists, they do know about the problem I sensed within the product. If that perceived problem is resolved quickly or is based on a faulty understanding or the use of a separate tool by particular individuals, then we can resume use of that feature. Until the problem is isolated, please refrain from using the snap screen facility for [Product A] bugs since at this time it appears to have some problems.

--------------------------------------------------------------------------------
From: Strazzere, Joe
Sent: Wednesday, June 28, 2006 1:33 PM
To: [Product A Architect]; [Product B Development Manager]
Cc: [QA]; [Product A Development Manager]; [Product B Developer]; [Product Manager]

Subject: RE: Images in [Product B] are very large

You are correct in your conclusion that some images attached to some Issues are overly-large.

You are incorrect in your conclusion that the snap screen facility is buggy.

[The individual QAer] was using MS-Paint, not snap screen.

-joe

Perhaps They Should Have Tested More - Manitoba Lotteries

Casino blames computer glitch for jackpot

WINNIPEG, Manitoba, July 4 (UPI) -- Two Canadian men are demanding a Winnipeg casino pay out the jackpot promised in error by a nickel slot machine.

The Manitoba Lotteries Corp. says the message that the men had won almost $210,000 ($190,000 U.S.) was a software error because the nickel machines usually do not have payouts above $3,000.

But attorney Josh Weinstein told the Canadian Broadcasting Corp. there was no sign on the machine giving a maximum payout. He says the men were promised 4 million nickels for successfully matching five numbers on the Keno machine.

"It's our position that it's not a mistake that my clients should be paying for, if it was a mistake," he said. "We don't have results of independent testing."



June 30, 2006

QA Leader's Checklist

So I'll be moving to a new job in a few weeks.

I'll be the Director of Quality Assurance at a company which has no QA department yet.

Since I tend to write a lot of lists, I've started a list of "things I need to learn, think about, ask about; people and groups I need to talk with". A few of these things are specific to my situation, but most apply to any QA Lead role.

I shared my initial list with my friends at QAForums.com.  They helped me expand the list even more.





Important Stakeholders
  • Customers
  • QA
  • QC
  • Engineering
  • Product Management
  • IT
  • Customer Support
  • Training
  • Marketing
  • Sales
  • Sales Support
  • Documentation
  • Management
  • Others

Products
  • Current Products
    • Manuals
    • Online Documentation
    • Training
  • Product History
  • Upcoming Products
  • Releases
  • Supported Releases

Technologies
  • Front End (Flash, AJAX, etc)
  • Application Servers
  • Database (SQL Server, Oracle, etc)
  • Operating Systems (Windows, Unix, Linux, etc)
  • Development Languages/Tools (Java, etc)

Platforms
  • Desktop Platforms
  • Server Platforms
  • Browser Platforms
    • IE
    • Firefox?
    • AOL?
    • Opera?
    • Other?
  • Supported E-Mail Clients
    • Outlook
    • AOL?
    • Other?
Processes
  • QA Processes
  • Development Processes
  • Release Processes
    • When do releases occur
    • How often
    • Service Level Agreements
  • Product Introduction Processes
  • Backup Processes
  • Disaster Recovery / Business Continuance
  • Problem Escalation Processes
  • Hiring

Tools
  • Document Collaboration
  • Issue Tracking
  • Test Management 
  • Requirements Tracking
  • Specification Tracking
  • Time Management

Schedules

  • Release Schedules
    • Internal Releases
    • External Releases
    • Beta Releases
  • Meeting Schedules
  • Customer Commitments

Requirements
  • Functional Requirements
  • Performance/Load Requirements
  • Security Requirements
  • Regulatory Requirements
  • Release Criteria

QA Infrastructure

  • Lab
  • Hardware
  • Software
  • Test Automation
  • Load/Performance Tool
  • Issue Tracking
  • Test Plans
  • Reporting
  • QA Business Plan
  • QA Mission Statement
  • QA Vision Statement
Metrics
  • Current
  • Proposed
Domain Knowledge

  • Glossary of Terms
  • Training Classes
  • Reading Materials
  • Other sources

Budget

  • QA
  • Overall Engineering

Company

  • History
  • Directions
  • Phone Numbers
  • Web Site
  • Intranet/Portal
  • Facilities
  • Parking
  • Company Culture
    • Dress Code
    • Working offsite
    • Current culture of quality (or lack thereof)
    • Mission Statement
    • Vision Statement

General Office Procedures

  • E-Mail Addresses
  • E-Mail Lists
  • IMs
  • Payroll
  • Meeting Rooms
  • Supplies
  • Human Resources
  • Mailing Address
  • Business Cards

People
  • Who holds the power
  • Who are the influencers
  • Who will help you
  • Who will hinder you
  • Who do you need to influence
  • Who organises the social events

June 27, 2006

Puzzling Response to a Bug Report

One person on my team was testing a new Troubleshooting feature, which attempted to rank similar items together on a single page, based on their "relatedness".

The feature had no written requirements at all, and was only recently implemented and deemed "testable".

Some of the results produced by this feature were seemingly unpredictable.

So he wrote an issue report demonstrating what appeared to be totally inconsistent results.

The developer replied this way:
Rankings are not human understandable. Don't try. Operates as designed. Not a bug.
Bear in mind that this was an Architect-level developer, not some Entry-level coder.


When there are no requirements, any solution will suffice.

June 15, 2006

Win32 API IsTextUnicode() is not foolproof - a simple demonstration

From the Aftermarket Pipes blog (http://apipes.blogspot.com/2006/06/this-api-can-break.html)



Over at WinCustomize, someone thought they'd found an Easter Egg in the Windows Notepad application. If you:
  1. Open Notepad
  2. Type the text "this app can break" (without quotes)
  3. Save the file
  4. Re-open the file in Notepad
Notepad displays seemingly-random Chinese characters, or boxes if your default Notepad font doesn't support those characters.

It's not an Easter egg (even though it seems like a funny one), and as it turns out, Notepad writes the file correctly. It's only when Notepad reads the file back in that it seems to lose its mind.

But we can't even blame Notepad: it's a limitation of Windows itself, specifically the Windows function that Notepad uses to figure out if a text file is Unicode or not.

You see, text files containing Unicode (more correctly, UTF-16-encoded Unicode) are supposed to start with a "Byte-Order Mark" (BOM), which is a two-byte flag that tells a reader how the following UTF-16 data is encoded. Given that these two bytes are exceedingly unlikely to occur at the beginning of an ASCII text file, it's commonly used to tell whether a text file is encoded in UTF-16.

But plenty of applications don't bother writing this marker at the beginning of a UTF-16-encoded file. So what's an app like Notepad to do?

Windows helpfully provides a function called IsTextUnicode()--you pass it some data, and it tells you whether it's UTF-16-encoded or not.

Sorta.

It actually runs a couple of heuristics over the first 256 bytes of the data and provides its best guess. As it turns out, these tests aren't terribly reliable for very short ASCII strings that contain an even number of lower-case letters, like "this app can break", or more appropriately, "this api can break".
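
You can reproduce the misdetection without Notepad. Take those same ASCII bytes (with no 0xFF 0xFE byte-order mark in front) and read them as UTF-16-LE, which is what Notepad falls back to once IsTextUnicode() guesses wrong -- every pair of ASCII letters fuses into one CJK character. A quick Python demonstration:

    data = "this app can break".encode("ascii")  # 18 bytes, 9 byte-pairs
    print(data.decode("utf-16-le"))              # -> 桴獩愠灰挠湡戠敲慫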

The documentation for IsTextUnicode says:

These tests are not foolproof. The statistical tests assume certain amounts of variation between low and high bytes in a string, and some ASCII strings can slip through. For example, if lpBuffer points to the ASCII string 0x41, 0x0A, 0x0D, 0x1D (A ^Z), the string passes the IS_TEXT_UNICODE_STATISTICS test, though failure would be preferable.

Indeed.

As a wise man once said, "In the face of ambiguity, refuse the temptation to guess."