Font Testing Framework

moderated by Ed Trager

Numerous groups are now involved in creating FLOSS fonts for a diverse set of language communities: the Tibetan & Himalayan Digital Library, DejaVu, WenQuanYi, and SIL International, to name just a few stellar examples. Less stellar examples --which shall remain nameless-- also exist in sufficient abundance ;-)

Given that fonts vary in quality, how do FLOSS system vendors decide which fonts to include in their distributions? Numerous subtle problems exist. For example, the fact that a Free font works well on Windows does not guarantee that it will work well, or at all, on a FLOSS platform, so testing specifically on FLOSS platforms is clearly a requirement. Until now this responsibility has largely fallen on the FLOSS system vendors themselves, to the extent that it was done at all.

Instead of having numerous FLOSS vendors do incomplete font testing using undocumented methods subject to unknown degrees of quality control, it makes sense to talk about creating standardized protocols and test suites for evaluating font quality and suitability on FLOSS platforms. Is a centralized consortium entrusted with this task needed? Which currently existing organization is best equipped to extend its mandate to include this task? Alternatively, should a new organization be created? Should the major FLOSS system vendors agree to pony up the money to support such a font testing group or lab?

Once an organization has assumed the mantle of serving as a central "clearinghouse" for font testing and presumably has the means to actually do it, the next question becomes, how do we actually test fonts? What are the most important criteria to test? Let's write them down.

Font testing must include both rigorous "hard" analytic evaluation (such as whether OpenType tables are structured correctly) and "soft" methods such as review by professional type designers and average users alike.

What tools are currently available for analytically-based "hard" testing? What tools can be extended? What tools need to be written? What should these tools do?
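As a very small illustration of what a "hard" test could look like, the sketch below checks whether a font exposes the SFNT tables that the OpenType specification lists as required. The table tags come from the OpenType spec; everything else (function names, the pass/fail report format) is a hypothetical sketch, not an existing tool. Obtaining the tags from a real font file is left out here, but could be done with a library such as fontTools.

```python
# Sketch of one "hard" analytic check: does a font contain the SFNT
# tables the OpenType spec lists as required for all fonts?
# (Table tags per the OpenType spec; how the tags are extracted from a
# font file is up to the tooling -- e.g. fontTools' TTFont(path).keys().)

REQUIRED_TABLES = {
    "cmap", "head", "hhea", "hmtx", "maxp", "name", "OS/2", "post",
}

def missing_tables(table_tags):
    """Return the required tables absent from a font's table tags, sorted."""
    return sorted(REQUIRED_TABLES - set(table_tags))

def check_font(font_name, table_tags):
    """Produce a one-line pass/fail report for a single font."""
    missing = missing_tables(table_tags)
    if missing:
        return f"{font_name}: FAIL (missing {', '.join(missing)})"
    return f"{font_name}: OK"
```

With fontTools installed, `table_tags` could be obtained as `set(TTFont(path).keys())` -- an assumption about available tooling, not something the discussion above prescribes. A real test suite would of course go much further (validating table contents, cmap coverage, OpenType layout features, and so on).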

For "soft" testing, perhaps having people evaluate fonts by filling out a survey will work. What are the questions that need to be asked? For example:

  • How easy is it to read the font on screen at small sizes?
  • How well do people like the style of the font?
  • Are the glyphs culturally correct for your language environment? If not, which ones need fixing?
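Answers to questions like the ones above could be tallied with a very small script. The sketch below is purely illustrative: the question keys and the 1-to-5 rating scale are assumptions, not part of any existing survey.

```python
# Hypothetical tally of "soft" survey responses: each response maps a
# question key to a rating from 1 (poor) to 5 (excellent).
# The question keys and the rating scale are illustrative assumptions.

def average_ratings(responses):
    """Average each question's ratings across all survey responses."""
    totals, counts = {}, {}
    for response in responses:
        for question, rating in response.items():
            totals[question] = totals.get(question, 0) + rating
            counts[question] = counts.get(question, 0) + 1
    return {q: totals[q] / counts[q] for q in totals}

# Two example responses for one font under review.
responses = [
    {"screen_legibility": 4, "style_appeal": 5, "cultural_correctness": 3},
    {"screen_legibility": 2, "style_appeal": 4, "cultural_correctness": 5},
]
```

Free-text follow-ups (e.g. "which glyphs need fixing?") would need separate handling, since they cannot be averaged.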

The effectiveness of soft review protocols can also be increased by having software tools that are specifically designed to assist in the process. What tools are available?

These are just a few ideas to get the discussion started ...

http://docs.scribus.net/index.php?lang=en&sm=setup&page=fonts2 describes the approach Scribus takes to fonts. For us it doesn't matter whether a font works on Linux or Windows; it has to work on all the professional imagesetters found in print shops. If there is any problem, Scribus converts the font to outlines rather than embedding it in a PS or PDF document. That means some fonts available in other apps are not available in Scribus, but in return we get very good responses from print shops. Just our 0.02 €.

Events/Summit/2006/TextLayout/FontTestingFramework (last edited 2013-11-25 17:42:53 by WilliamJonMcCann)