HackTheHearst Judging Criteria

General criteria

Entries will be judged on the following general criteria:

  1. Identification and presentation of compelling humanistic research questions. Your presentation must state what kinds of research questions and/or areas of inquiry your interface is developed to address. Who are your intended users? What are they looking for in the Hearst Museum collections information? How does your interface help them find the answers to their questions, or better understand issues regarding cultural heritage?

  2. Effective and compelling use of technology. Does your user interface effectively facilitate the investigation of the humanistic research questions you’ve identified? Is it visually appealing? Is it easy to use? Does it provide users with an appropriate number, variety, and/or sequence of options or steps for issuing and refining queries? (A single search box is a very easy interface to understand and use, but may not be sufficient to guide a researcher through an effective investigation of her questions or areas of inquiry.)

  3. Documentation. Is the code appropriately structured, and commented and/or documented to a readable degree? Are steps necessary to deploy the interface and its dependencies, however complex or trivial, documented and cited so that a person with system administration skills but who is only marginally familiar with the technologies used can efficiently review and execute those steps?

  4. Adherence to the contest requirements. Groups or entries that fail to meet the contest requirements will be disqualified. If you have questions about the contest requirements, please contact the organizers.

Specific criteria

The specific judging criteria are as follows:

Presentation

These aspects of judging will count for 50% of the entrants' scores:

  1. Audience: has the team identified the intended users of the interface?
  2. Research question / area of inquiry: has the team identified what the users are trying to do and what they're looking for in the Hearst Museum digital collections information?
  3. Does the interface address the above in an appropriate manner?
  4. Research / inquiry effectiveness: how well does the interface facilitate addressing the identified research questions or areas of inquiry?
  5. Is the research question or area of inquiry compelling from an educational, research, or museum perspective?
  6. Is it visually appealing?
  7. Did it pass the WAVE accessibility test? (yes/no—required to win)
  8. Does the interface meet contest requirements that it not be indecent, defamatory, in obvious bad taste, or disrespectful in any way? (yes/no—required to win)

Technical

These aspects of judging will count for 35% of the entrants' scores; the percentages below indicate each criterion's relative weight within this category:

  1. (20%) How good is the UX presented by the submission? (Are the UI components of the right sort and laid out logically, encouraging discovery and easy to understand and use? Is the flow acceptable and the presentation of results clear? Is it easy to recover from user errors?)
  2. (20%) How good are the overall design and software architecture? (Is the modularization coherent and conventional, or is it eclectic and hard to understand? Is there room to expand and develop the application, adding new functionality?)
  3. (15%) How polished and visually complete is the submission? (Is the CSS clean and legible? Is the overall graphic design appealing?)
  4. (15%) How good is the code quality (coding style, inline documentation, clarity of implementation, tests)? Is the code concise and understandable?
  5. (10%) How clever or original is the submission from a software point of view? (e.g. Has this been done before? Does it present a novel use of existing patterns and techniques? Is the UX something that is likely to produce new, unforeseen, or previously unobtainable results?)
  6. (10%) How quick is the response time? How long does it take to get a useful result?
  7. (10%) How easy will it be to scale the design and implementation to support more users, more data, and more features? That is, how efficient is the submission now, and can the required performance be sustained were more demands to be made?
  8. (yes/no—required to win) Does the submission manage state in a way that ensures the freshness of the results (given that the datastore is normally refreshed nightly)?
  9. (yes/no—required to win) Are steps necessary to deploy the interface and its dependencies, however complex or trivial, documented and cited so that a person with system administration skills but who is only marginally familiar with the technologies used can efficiently review and execute those steps?

Judges' discretion

The final 15% of the entrants' scores will be at the discretion of the judges, to account for aspects of the submitted apps that are not covered by the above-listed criteria.

Scoring

Each criterion will be scored 0–5 by each judge, using the scale below (an illustrative aggregation example follows the scale):

5 - Excellent; far exceeds criteria/requirements/expectations
4 - Good; goes beyond criteria/requirements/expectations
3 - Meets criteria/requirements/expectations
2 - Weak; falls somewhat short of criteria/requirements/expectations
1 - Poor; fails to meaningfully address criteria/requirements/expectations
0 - Absent or missing
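
For concreteness, the sketch below shows one way a single judge's scores could be combined using the weights above. The rules do not spell out an exact aggregation formula, so the equal weighting of the Presentation criteria, the normalization to a percentage, and the sample scores are assumptions made for illustration only; the yes/no (required-to-win) criteria are treated as a separate pass/fail gate and omitted.

    # Illustrative sketch only (Python). The rules above define category weights
    # (Presentation 50%, Technical 35%, judges' discretion 15%) and per-item
    # weights within the Technical category, but not an exact aggregation
    # formula; equal weighting of the Presentation criteria and the sample
    # scores are assumptions for demonstration.

    # All scores use the 0-5 scale defined above.
    presentation_scores = [4, 3, 5, 4, 3, 4]  # Presentation criteria 1-6, assumed equal weight

    technical_scores = {  # criterion: (weight within the Technical category, score)
        "UX": (0.20, 4),
        "design and architecture": (0.20, 3),
        "visual completeness": (0.15, 4),
        "code quality": (0.15, 3),
        "originality": (0.10, 5),
        "response time": (0.10, 4),
        "scalability": (0.10, 3),
    }

    discretion_score = 4

    # Normalize each category to the 0-1 range, then apply the category weights.
    presentation = sum(presentation_scores) / (5 * len(presentation_scores))
    technical = sum(weight * score for weight, score in technical_scores.values()) / 5
    discretion = discretion_score / 5

    total = 0.50 * presentation + 0.35 * technical + 0.15 * discretion
    print(f"Weighted total: {total:.2%}")  # "Weighted total: 75.88%" for these sample scores

A real scoring scheme would also need to average across judges and apply the pass/fail criteria; the point here is only to show how the listed weights might interact.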