Multiple Displays BoF
See the hacking/BoF days page for time and location information.
The majority of System76 customers—including large, well-known companies like Pixar and ESPN—are using GNOME with multiple displays. How can we make the experience in Mutter, Shell, and Settings better for these different contexts?
Context-based displays, and how they relate to shell components and workspaces. I plan to dive into research-based design work at System76. What are the immediate concerns, and what's the best way for me to communicate findings and designs?
We can make mixed DPI actually work pretty well using existing tooling and a little bit of glue on top of X. That said, there are limitations in Mutter that we are working around, which make the process more complicated and a little rougher for the user. What specifically are we working around, and what could be done upstream in Mutter automatically?
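A minimal sketch of that X-level glue, assuming the approach of rendering everything at the highest scale factor and using `xrandr` output transforms to downscale lower-DPI monitors. The `Monitor` type, the DPI threshold, and the connector names are illustrative assumptions, not System76's actual implementation:

```python
# Sketch: render at the max scale factor, then downscale low-DPI outputs
# with xrandr so 2x-rendered content appears at a sensible physical size.
from dataclasses import dataclass

@dataclass
class Monitor:
    connector: str
    width_px: int
    width_mm: int   # EDID-reported physical width

def native_scale(m: Monitor, hidpi_threshold_dpi: float = 170.0) -> int:
    """Guess a monitor's integer scale factor from its pixel density."""
    if m.width_mm == 0:          # projectors often report no physical size
        return 1
    dpi = m.width_px / (m.width_mm / 25.4)
    return 2 if dpi >= hidpi_threshold_dpi else 1

def xrandr_commands(monitors: list[Monitor]) -> list[str]:
    """Render at the highest scale; downscale every lower-DPI output."""
    global_scale = max(native_scale(m) for m in monitors)
    cmds = []
    for m in monitors:
        factor = global_scale / native_scale(m)
        cmds.append(f"xrandr --output {m.connector} --scale {factor:g}x{factor:g}")
    return cmds
```

For a HiDPI internal panel plus a 1080p external monitor, this yields `--scale 1x1` for the internal output and `--scale 2x2` for the external one, so 2x-rendered content shows up at roughly the right size on the low-DPI display, at the cost of some blur from the downscale.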
- Mutter, GNOME Shell, and Settings developers
- Those interested/involved in UX design
People planning to attend or expected to attend:
- Currently the "only on primary" behavior is a Mutter-level hack that treats windows on non-primary displays like "on all workspaces" windows.
- Agree on two designs: the proper way to behave in "presentation mode" versus "maximum real estate" mode.
- Per-display properties could be interesting (e.g. an ESPN client with the TV as a presentation-mode display), instead of just global modes.
- Presentation context might be a bigger conversation than just displays, e.g. notifications and Night Light. Would be interesting to explore further, along with how it works with the existing software ecosystem, e.g. LibreOffice Impress.
- Detecting projectors is hard, but we could lean on model numbers, EDID physical size, and a quirks file. We could also prompt the user when uncertain.
- Lock screen issues: right now it shows only on the "real" primary display, i.e. the internal one. The last logged-in user's primary display should be the default.
- Research System76 customers and see if we can synthesize the findings into a few of the most common use cases and contexts, paying attention to the physical hardware and if/how it relates to those contexts. This could inform smarter defaults.
- There may be Shell UI things we could do better in these multiple-display contexts, like the workspace switcher.
- Window-switching behavior (Alt+Tab, Alt+Above-Tab, Alt+Esc) could use more thought, especially as it relates to multiple displays and workspaces. Downstreams are changing this behavior, e.g. so that switching doesn't focus all windows of the same app. Unity had an all-in-one hybrid mode (remembering three shortcuts is confusing). There was talk at the London design sprint of reworking window switching to make more sense within the overall design of Shell, e.g. using the window picker.
- Another problem space: what happens to windows when you switch contexts/attached displays? Does a window stay with the primary display, or with whichever physical display it was on? This is especially relevant with laptops and closing/opening lids.
- To communicate designs: come to the GNOME Design cabal and discuss/propose things, then decide where to go from there. For code, get involved as early as possible as well, and let interested parties know what we're working toward.
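The projector-detection heuristic mentioned in the notes could be sketched roughly like this, reading the EDID base block's physical-size field and consulting a quirks list keyed on the manufacturer/product IDs. The quirks entries are invented for illustration; only the EDID byte offsets are from the spec:

```python
# Hedged sketch: guess "is this a projector?" from an EDID base block.
# Bytes 8-9 hold the packed 3-letter PNP manufacturer ID, bytes 10-11 the
# little-endian product code, and bytes 21-22 the image size in cm
# (projectors commonly report 0x0 there, meaning "undefined size").

def parse_pnp_id(edid: bytes) -> str:
    """Decode the 3-letter PNP manufacturer ID from EDID bytes 8-9."""
    word = (edid[8] << 8) | edid[9]
    return "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                   for shift in (10, 5, 0))

# Hypothetical (vendor, product) pairs known to be projectors.
PROJECTOR_QUIRKS = {("EPS", 0x1234)}

def looks_like_projector(edid: bytes) -> bool:
    vendor = parse_pnp_id(edid)
    product = edid[10] | (edid[11] << 8)
    if (vendor, product) in PROJECTOR_QUIRKS:
        return True
    # No reported physical size is a strong (but not certain) hint.
    return edid[21] == 0 and edid[22] == 0
```

Since the size heuristic is only a hint, a real implementation would likely treat a `True` result here as "probably a projector" and fall back to prompting the user, as suggested above.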
- CassidyJames: research customers, synthesize the data, and work with the GNOME Design team on the future.
Blog posts about the session: