Project Ideas

At ASSETS 2006, we were asked by students and teachers how they could get involved in the development of LSR. We've since received similar requests on IRC from students looking to contribute to the project. To invite development on the screen reader interface for LSR and the use of the LSR platform in new projects, we've created the following lists of project ideas. If you are interested in tackling any of these projects, or would like to tell us about your own ideas, please send a message to lsr-list@gnome.org. We'll be happy to support you as best we can in your development.

Student Projects

The following projects are sized for a semester's worth of effort by an individual student or a small team. These ideas are good stepping stones toward deeper involvement in LSR. Funding opportunities for continued student involvement may be available for students who prove themselves capable on any of these projects.

Of course, we will welcome non-student developers on these projects as well.

Multiple magnifier zoom-regions

We have support for a single magnifier zoom region today. Explore how multiple zoom regions might be put to use for the benefit of the user. For instance, imagine a primary zoomer working in conjunction with a secondary zoomer positioned over and magnifying an IM conversation.
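
As a sketch of what that might look like, here is a toy model of a focus-following primary region alongside a pinned secondary region. All of the class and method names are hypothetical stand-ins; LSR's real magnifier device API differs.

    # Toy model of two zoom regions; every name here is hypothetical.
    class ZoomRegion(object):
        def __init__(self, name, x, y, width, height, zoom):
            self.name, self.zoom = name, zoom
            self.x, self.y, self.width, self.height = x, y, width, height

    class MultiZoomer(object):
        '''Holds a focus-following primary region plus pinned secondaries.'''
        def __init__(self):
            self.regions = []

        def follow_focus(self, fx, fy):
            # Only the primary region re-centers on the focus point; a
            # secondary pinned over an IM conversation stays where it is.
            primary = self.regions[0]
            primary.x = fx - primary.width // 2
            primary.y = fy - primary.height // 2

    mz = MultiZoomer()
    mz.regions.append(ZoomRegion('primary', 0, 0, 1024, 768, 2.0))
    mz.regions.append(ZoomRegion('im-window', 600, 400, 400, 300, 3.0))
    mz.follow_focus(512, 384)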

Help dialog

The screen reader UI spec calls for a dynamic, contextual help dialog that provides information about how to use the focused widget, accessibility information from the focused widget, information about the current application, and information about what LSR keyboard commands are currently available. A student would develop an accessible GUI dialog (view) using Glade to show this information, and then create an LSR script (model) to populate the dialog with the appropriate information.
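
A rough sketch of the model half follows. The Accessible stand-in and the report fields are assumptions made for illustration; a real script would pull this information from AT-SPI and feed it to the Glade-built dialog.

    from collections import namedtuple

    # Stand-in for a real accessible object; the fields are assumptions.
    Accessible = namedtuple('Accessible', 'name role description')

    def build_help_report(widget, app, commands):
        '''Gather the four kinds of information the UI spec calls for.'''
        return {
            'widget usage': 'How to operate a %s' % widget.role,
            'widget info': widget.description,
            'application': app.name,
            'commands': ', '.join(sorted(commands)),
        }

    widget = Accessible('Send', 'push button', 'Sends the current message')
    app = Accessible('Pidgin', 'application', '')
    print(build_help_report(widget, app, {'Caps+H': 'help', 'Caps+T': 'title'}))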

Search dialog improvements

We have a basic search dialog implemented, but it could stand a number of usability improvements. We also have a neat idea for quasi-modal searching, based on ideas in Jef Raskin's The Humane Interface, that we'd like to see implemented as well. A student who succeeds in getting the search dialog working might tackle the quasi-modal search next.
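
For the curious, here is a minimal sketch of the quasi-modal behavior, assuming a dedicated trigger key: the search mode exists only while the key is physically held, so releasing it ends the search. The key names and print() output are stand-ins for real LSR input events and speech.

    class QuasiModalSearch(object):
        def __init__(self, text):
            self.text, self.query, self.active = text, '', False

        def key_down(self, key):
            if key == 'Search':            # hypothetical trigger key
                self.active, self.query = True, ''
            elif self.active:
                self.query += key
                hit = self.text.find(self.query)
                print('match at %d' % hit if hit >= 0 else 'no match')

        def key_up(self, key):
            if key == 'Search':
                self.active = False        # releasing the key ends the mode

    search = QuasiModalSearch('the quick brown fox')
    for key in ('Search', 'q', 'u'):
        search.key_down(key)
    search.key_up('Search')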

Screen reader UI improvements

Not all of the screen reader event reports and commands listed in the UI spec are implemented. A student would add these features to our existing BasicSpeechPerk.

Advanced Braille interface

Our BasicBraillePerk renders the current item on the Braille display along with the caret and continuation characters. More advanced features such as status cells may be desirable. Working on these features will require a good deal of knowledge about Braille.
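
As one possible design, a few cells at the start of the display could be reserved for status information, with the remaining cells holding the text and a continuation character. The three-cell status block and the '>' marker in this sketch are assumptions, not BasicBraillePerk's actual format.

    def compose_line(text, caret, display_size=40, status_cells=3):
        '''Lay out one line of Braille cells with a leading status block.'''
        status = 'c%02d' % caret            # e.g. show the caret column
        body_size = display_size - status_cells - 1
        body = text[:body_size]
        if len(text) > body_size:
            body = body[:-1] + '>'          # continuation character
        return status + ' ' + body.ljust(body_size)

    print(compose_line('A long line of text that will not fit on the display', 5))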

Other application scripts

We have scripts for improving the usability of particular GNOME desktop applications. We could certainly use more. A student on this project would be responsible for identifying accessibility and usability problems in common desktop applications, and then writing scripts to remedy them.

Tool scripts

LSR supports the manual loading of scripts at runtime. These scripts can serve as _tools_ useful across applications and suited to particular tasks. For instance, a "spell checker script" could do on-the-fly spell checking in any multiline text area for a screen reader user. As another example, a "Python coding script" could read lines of Python code more intelligibly when loaded. The latter example would make script coding easier for people with visual impairments. A student could define their own tool script idea if neither of these proves interesting.
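
The spell checker idea, for example, only needs to watch text changes and compare words against a dictionary. Here is a toy version: the tiny word list stands in for a real dictionary backend such as aspell, and a real tool script would announce results through speech rather than print them.

    import re

    DICTIONARY = set('the quick brown fox jumps over a lazy dog'.split())

    def misspellings(text):
        for match in re.finditer(r"[a-zA-Z']+", text):
            word = match.group().lower()
            if word not in DICTIONARY:
                yield word, match.start()   # word plus offset for announcing

    for word, offset in misspellings('The quikc brown fox jumsp'):
        print('possible misspelling %r at offset %d' % (word, offset))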

More device support

We currently support text-to-speech output through any engine pluggable into gnome-speech, as well as through Speech Dispatcher. We also support the GNOME magnifier and BrlTTY, plus the FMOD Ex mixer for spatialized sound and non-speech audio. Extensions supporting additional devices could expand the potential uses for LSR. For instance, joysticks and game pads might be interesting devices for teaching children how to use a screen reader. A student would work with us to identify and develop support for a new input or output device.
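
As a taste of what a joystick device extension might involve, here is a sketch that maps joystick buttons to screen reader commands using pygame (our choice here purely for illustration; an LSR device would wrap this behind the platform's device interface, and the button-to-command mapping is invented).

    import pygame

    BUTTON_COMMANDS = {0: 'read current item', 1: 'next item'}  # invented

    pygame.init()
    pygame.joystick.init()
    if pygame.joystick.get_count():
        stick = pygame.joystick.Joystick(0)
        stick.init()
        while True:
            for event in pygame.event.get():
                if event.type == pygame.JOYBUTTONDOWN:
                    command = BUTTON_COMMANDS.get(event.button)
                    if command:
                        print('would run LSR command:', command)
    else:
        print('no joystick found')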

User and developer guides

We have a tutorial, code documentation, an architecture spec, and a UI spec for LSR developers, plus a dirt-simple web page listing LSR keyboard commands for users. We need a more general guide for script writers and more complete documentation for end users. A student interested in technical documentation would be ideal for this project.

New Extension Projects

The screen reader user interface is just a set of LSR extensions. New extensions can be developed to create entirely new user experiences for LSR without changes to the LSR platform. For instance, the LSR platform could load scripts that monitor a web server log file and announce when important events occur (e.g. shades of "Your box is being hacked!"). See our "Linux Screen Reader ... is not just a screen reader" screencast for details.
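
A minimal sketch of that log-watching script follows. The log path and the "suspicious" pattern are assumptions, and print() stands in for LSR's speech output.

    import re, time

    LOG_PATH = '/var/log/apache2/access.log'   # hypothetical log location
    SUSPICIOUS = re.compile(r'\.\./|/etc/passwd')

    def follow(path):
        with open(path) as log:
            log.seek(0, 2)                     # start at the end of the file
            while True:
                line = log.readline()
                if line:
                    yield line
                else:
                    time.sleep(1)

    for line in follow(LOG_PATH):
        if SUSPICIOUS.search(line):
            print('Your box is being hacked!')  # would be spoken by LSR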

Audio memory aid

We have a script that can turn any text area into a todo list with audible reminders. Polishing this interface might make it useful for people with cognitive decline and their caregivers.
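
The reminder half of the idea boils down to parsing times out of the text and scheduling announcements. A sketch, under the assumption that each todo is one "HH:MM task" line, with print() standing in for speech:

    import sched, time

    def parse_todos(text):
        for line in text.splitlines():
            clock, _, task = line.partition(' ')
            hours, minutes = map(int, clock.split(':'))
            yield hours * 3600 + minutes * 60, task

    def schedule(text):
        timer = sched.scheduler(time.time, time.sleep)
        now = time.localtime()
        midnight = time.time() - (now.tm_hour * 3600 + now.tm_min * 60 + now.tm_sec)
        for seconds, task in parse_todos(text):
            timer.enterabs(midnight + seconds, 1, print, ('Reminder: ' + task,))
        timer.run()

    schedule('14:30 take medication\n15:00 call the nurse')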

Pictorial reinforcement

We have a script and dialog that can fetch images from Flickr representing selected text in any application. This interface might prove useful for people with learning disabilities, or even for people learning English as a second language. The script is usable as-is, but could be polished into a compelling tool with some effort.
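
For reference, the fetch step against Flickr's public REST API looks roughly like the sketch below. The API key is a placeholder you must supply yourself, and the photo URL format assumed here is Flickr's standard static-image scheme.

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    API_KEY = 'YOUR_FLICKR_API_KEY'     # placeholder; get one from Flickr

    def image_urls(text, count=3):
        query = urlencode({
            'method': 'flickr.photos.search', 'api_key': API_KEY,
            'text': text, 'per_page': count,
            'format': 'json', 'nojsoncallback': 1,
        })
        reply = urlopen('https://api.flickr.com/services/rest/?' + query)
        photos = json.load(reply)['photos']['photo']
        return ['https://live.staticflickr.com/%(server)s/%(id)s_%(secret)s.jpg'
                % photo for photo in photos]

    print(image_urls('golden retriever'))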

On-screen keyboard

There is a project underway to create a cross-platform (GNOME/KDE/OS X/Windows) on-screen keyboard. The LSR scripting environment is not tied to AT-SPI or to a particular GUI toolkit. It would be interesting to explore how LSR might serve as a platform for this project. All of the LSR innards for supporting dynamic loading of scripts and GUIs, accessing accessibility information, and controlling accessible widgets could be reused in the OSK.

Research Projects

The following are research-oriented projects, perhaps suitable for graduate students.

End-user programming

Writing LSR scripts to improve accessibility currently requires programming knowledge. It would be interesting to explore how an end user might "program" new scripts simply by interacting with an application. LSR can already see all of the events occurring on the desktop. If LSR could store and mine this information to model patterns of use, it could improve access to certain programs based on repeated workarounds and learned preferences. All of this work could be performed in an LSR script without requiring any changes to the LSR platform.
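
As a toy illustration of the mining step, the sketch below counts repeated fixed-length event sequences and surfaces the common ones as candidates for a learned shortcut. A real script would feed it live AT-SPI events rather than a canned list.

    from collections import Counter

    def frequent_patterns(events, length=3, minimum=2):
        windows = zip(*(events[i:] for i in range(length)))
        return [(pattern, n) for pattern, n in Counter(windows).most_common()
                if n >= minimum]

    log = ['focus:menu', 'select:Edit', 'select:Find',
           'focus:menu', 'select:Edit', 'select:Find',
           'focus:text']
    for pattern, n in frequent_patterns(log):
        print('%d times: %s' % (n, ' -> '.join(pattern)))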
