Improve screen reader accessibility with automatic focus and content announcement #4511
Conversation
Review comment on qt/aqt/reviewer.py (outdated):

```python
sys.path.insert(0, os.path.dirname(ao2_dir))

# Import and initialize Auto output
from accessible_output2.outputs.auto import Auto
```
Including binary files or third party packages directly in the repo is not a good practice. Dependencies should be declared in qt/pyproject.toml
Force-pushed from 1bc347f to e88fa07.
I removed the bundled binaries and declared accessible-output2 as a dependency in pyproject.toml instead. I also cloned my fork into an empty directory to test it. It should be good now.

Update: I was added to the contributors file as Garrett Johnson. This name is not my real name. I changed this to my GitHub alias, but I can rebase with my real name if that is necessary. I also updated the PR to get rid of merge conflicts.
Force-pushed from 233617d to 3318831.
Commit: focus management
- Add aria-hidden to decorative mark/flag elements
- Auto-focus show answer button when question displays (if no type-in field)
- Auto-focus default ease button when answer is revealed
- Add bridge commands: focusAnswerButton and focusDefaultEase

Co-Authored-By: Claude Opus 4.5 <[email protected]>

Commit: card content announcement
- Uses accessible-output2 package (from PyPI) to speak card content through the active screen reader on Windows, macOS, and Linux.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
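The bridge commands named in this commit can be illustrated with a small dispatch sketch. The command names (focusAnswerButton, focusDefaultEase) come from the commit message; the ReviewerStub class and its handler bodies below are hypothetical stand-ins, not Anki's actual code.

```python
# Hypothetical sketch of the two bridge commands named in the commit
# message. ReviewerStub and its handlers are illustrative, not Anki code.

class ReviewerStub:
    def __init__(self):
        self.focused = None  # which UI element currently has focus

    def on_bridge_cmd(self, cmd: str) -> bool:
        """Dispatch a bridge command from the webview; True if handled."""
        handlers = {
            "focusAnswerButton": self._focus_answer_button,
            "focusDefaultEase": self._focus_default_ease,
        }
        handler = handlers.get(cmd)
        if handler is None:
            return False  # unknown command: let other handlers try
        handler()
        return True

    def _focus_answer_button(self):
        # question side, no type-in field: focus "Show Answer"
        self.focused = "answer"

    def _focus_default_ease(self):
        # answer side: focus the default ease button
        self.focused = "ease"
```

The dispatch-table shape keeps each new accessibility command a one-line addition rather than another `elif` branch.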
1 and 2 are probably better split into separate PRs, but please wait for dae's feedback first.

Speech synthesis will also need to be tested on different platforms and confirmed to work properly with different card types; right now it's not really helpful for cloze cards, for example.

Anki already has built-in support for text-to-speech that works well. We should ideally use the same implementation for consistency and better cross-platform support.
I understand, and am happy to re-implement these changes once the refactor is completed. I would also be happy to submit individual PRs for this.

Regarding the text-to-speech implementation: I appreciate that you highlighted this, as I was unaware that TTS features already existed in Anki. I would, however, like to point out that many blind people prefer content being announced by their screen reader wherever possible, because we like to have the same voice read things to us consistently. Screen readers can announce things differently, and they let the individual user modify those rules at any time. I myself have a screen reader addon that switches languages on the fly when reading content - something Anki's TTS system wouldn't be able to do effectively, and which is vastly important for language decks.

Accessible Output 2 also sends information to braille displays, which the built-in TTS feature cannot manage. Braille displays are important to quite a number of people, including those who have both an auditory and a visual impairment. I do not have an auditory impairment, but I do use a braille display sometimes, so being able to receive information through both speech and braille matters to me.

In this case, Accessible Output 2 is the best option for outputting information to a screen reader. However, I would be willing to implement a fallback system that uses the built-in TTS should Accessible Output 2 not be available on the user's system. I can confirm that Accessible Output 2 works on Windows, Mac, and Linux; I do not believe it works on other platforms, which means this implementation would probably not work on Android. I'm not sure if this code base is used for the Android version, but if it is, this is where a fallback system would be useful.

That being said, I would not want to drop AO2 entirely, given that it provides features Anki's TTS system cannot. I am happy to discuss accessibility topics further, either here or on other official communication channels, if you would prefer that. In the meantime, I understand if you wish to close this pull request.
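The fallback idea described above can be sketched minimally. This assumes only that accessible_output2 (the real PyPI package) exposes an Auto class with a speak() method; the make_speaker helper and the fallback parameter are illustrative names, not code from this PR.

```python
# Hedged sketch of the proposed fallback: prefer the user's screen reader
# via accessible_output2 when it is installed, otherwise fall back to a
# caller-supplied TTS function (e.g. one wrapping Anki's built-in TTS).
# make_speaker and fallback are illustrative names, not from this PR.
from typing import Callable


def make_speaker(fallback: Callable[[str], None]) -> Callable[[str], None]:
    """Return a speak(text) function: AO2 if importable, else the fallback."""
    try:
        from accessible_output2.outputs.auto import Auto
    except ImportError:
        return fallback  # package unavailable: route to built-in TTS
    output = Auto()  # picks the active screen reader at runtime
    return output.speak
```

Because the choice happens once at startup, the rest of the reviewer code can call speak() without caring which backend is active.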
That's a good point. It looks like accessible_output2 is not much code, so it might be easy to port to the Rust backend for better integration in the future. Qt also has some accessibility support that we could evaluate.

Yes, I think this is a topic that requires discussion and feedback from the target users, who are unlikely to be following GitHub updates. I recommend making a post on the forums and perhaps linking to it on the Anki subreddit for more visibility.
This PR was created with help from Claude Code.
This PR adds two screen reader accessibility improvements:

1. Automatic Focus Management
- Auto-focus the show answer button when the question displays (if there is no type-in field)
- Auto-focus the default ease button when the answer is revealed
- Add aria-hidden="true" to decorative mark and flag elements

2. Shift+R Card Content Announcement
- Speaks the current card's content through the active screen reader

Implementation Details

Files Modified:
- qt/aqt/reviewer.py - Added focus bridge commands, Shift+R keybinding, and accessible_output2 integration
- ts/reviewer/index.ts - Added focus logic for the answer button and content extraction/announcement

Files Added:
- qt/aqt/data/accessible_output2/ - Complete accessible_output2 package with MIT license

Test Plan
- ./check

License Compliance

The accessible_output2 library is MIT licensed and fully compatible with Anki's AGPL v3 license. The license file is included at qt/aqt/data/accessible_output2/LICENSE-ACCESSIBLE-OUTPUT2.txt.

🤖 Generated with Claude Code
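The content extraction/announcement step can be sketched roughly as follows. This is a hedged illustration in Python (the PR's actual extraction lives in ts/reviewer/index.ts) of collecting announceable card text while skipping subtrees marked aria-hidden="true", such as the decorative mark/flag elements this PR hides; all names here are hypothetical.

```python
# Illustrative sketch: gather text a screen reader should announce,
# skipping aria-hidden subtrees. Names are hypothetical, not from the PR.
from html.parser import HTMLParser


class AnnouncementExtractor(HTMLParser):
    # void elements never emit an end tag, so exclude them from depth counting
    VOID = {"br", "hr", "img", "input", "wbr", "source", "meta", "link"}

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0  # >0 while inside an aria-hidden subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.VOID:
            return
        if self.hidden_depth:
            self.hidden_depth += 1
        elif dict(attrs).get("aria-hidden") == "true":
            self.hidden_depth = 1

    def handle_endtag(self, tag):
        if tag not in self.VOID and self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.parts.append(data.strip())


def extract_announcement(html: str) -> str:
    """Return the text a screen reader should announce for one card."""
    parser = AnnouncementExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

Depth counting (rather than a boolean) is what lets nested elements inside a hidden decoration stay hidden until the outermost aria-hidden element closes.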