Testing new features on a per-user basis

This relates to an accessibility issue I opened concerning the JAWS screen reader. It affects a small number of users, and one approach I'm considering is to let affected users opt in to a fix, which they could then test and provide feedback on.

Have issues like this come up before? Does FCC already have something in place to handle this? Or perhaps there are other ways for users to test proposed fixes?

This issue has now officially been confirmed as a bug in JAWS 2022. I'm not sure when it will be fixed, and it could potentially be around for a while. I have already created a fix on my local copy that removes the offending aria-roledescription attribute from the Monaco editor. What I'd like to know is whether there is something in place that allows us to selectively apply this fix.
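For context, this is roughly the shape of my local workaround, not the exact patch: after the editor mounts, strip the attribute from Monaco's hidden input textarea. The `textarea.inputarea` selector is an assumption about Monaco's current DOM structure and may need adjusting.

```ts
import * as monaco from 'monaco-editor';

// Sketch of the workaround: remove the aria-roledescription attribute
// that trips up JAWS 2022 from Monaco's hidden <textarea>.
export function removeAriaRoledescription(
  editor: monaco.editor.IStandaloneCodeEditor
): void {
  // Monaco renders its keyboard input as a hidden textarea inside the
  // editor container; the selector below is an assumption.
  const textarea = editor
    .getContainerDomNode()
    .querySelector('textarea.inputarea');
  textarea?.removeAttribute('aria-roledescription');
}
```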

We currently have a local storage field called isAccessibilityOn (need to double-check the name) which we use for some logic in the editor. I assume this could be leveraged?
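Something like the following is what I had in mind for gating it (a rough sketch only; the `isAccessibilityOn` key is from memory and needs to be verified, and `maybeApplyJawsFix` is just a hypothetical helper name):

```ts
import * as monaco from 'monaco-editor';

// Hypothetical helper: apply the JAWS workaround only for users who have
// the existing accessibility flag set in local storage.
export function maybeApplyJawsFix(
  editor: monaco.editor.IStandaloneCodeEditor
): void {
  const accessibilityModeEnabled =
    window.localStorage.getItem('isAccessibilityOn') === 'true';

  if (accessibilityModeEnabled) {
    // Same DOM workaround as sketched above.
    editor
      .getContainerDomNode()
      .querySelector('textarea.inputarea')
      ?.removeAttribute('aria-roledescription');
  }
}
```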
