diff --git a/Sources/AccessibilityDocumentation/Constants.swift b/Sources/AccessibilityDocumentation/Constants.swift index 37f509d..2a5dbd1 100644 --- a/Sources/AccessibilityDocumentation/Constants.swift +++ b/Sources/AccessibilityDocumentation/Constants.swift @@ -331,7 +331,7 @@ extension Book { //extension UIAccessibility.AssistiveTechnologyIdentifier { // // @available(iOS 8.0, *) -// public static let notificationSwitchControl: UIAccessibility.AssistiveTechnologyIdentifier +// public static let notificationSwitchControl: UIAccessibility.AssistiveTechnologyIdentifier // // @available(iOS 9.0, *) // public static let notificationVoiceOver: UIAccessibility.AssistiveTechnologyIdentifier diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/SwitchControl.md b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/SwitchControl.md index 050e627..120a47b 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/SwitchControl.md +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/SwitchControl.md @@ -10,7 +10,7 @@ Allows to connect external devices and link them to any command. As a result paralyzed people can control a phone by simple signals: finger movement, muscle stretches, etc. Also, a iPhone's camera can recognize facial expression or any sound like a command. In the end user moves focus on screen and pass command to focused element by submenu that is presented after selection. -![Switch control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) +![Switch Control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) ## How to setup diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/VoiceControl.md b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0.
AssistiveTechnologies/VoiceControl.md index e5f58a5..5036d72 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/VoiceControl.md +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/0. AssistiveTechnologies/VoiceControl.md @@ -1,6 +1,6 @@ # Voice Control -Adds additional commands over graphical UI to control a phone by voice commands. A user of VoiceControl can see, but can't touch their phone, as a result he can pronounce commands lite "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speech, convert it to text and links command to elements' description. +Adds additional commands over the graphical UI to control a phone by voice. A user of Voice Control can see, but can't touch their phone; as a result, they can pronounce commands like "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speech, converts it to text and links commands to elements' descriptions. @Metadata { @PageImage( @@ -12,7 +12,7 @@ Adds additional commands over graphical UI to control a phone by voice commands. ## Overview -![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) +![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) [Video how to use Voice Control](https://www.youtube.com/watch?v=eg22JaZWAgs) diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/1. Basic/DescribeElements.md b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/1. Basic/DescribeElements.md index 4a297fd..d5f4a0d 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Articles/1. Basic/DescribeElements.md +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Articles/1. Basic/DescribeElements.md @@ -57,7 +57,7 @@ To describe element we can use label, value and trait (like a type) ### Label vs Value -Important to understand differences between label and value.
Label should be as short as possible: Voice Control will use it as HUD over UI to name things for feature voice commands, but not show value part, because we expect that it's already presented for user on screen. +It's important to understand the differences between label and value. The label should be as short as possible: Voice Control will use it as a HUD over the UI to name elements for voice commands, but won't show the value part, because we expect it's already presented to the user on screen. Otherwise, adjustable elements allow to change only value part and after change only value part will be pronounced to user. diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Getting Started.md b/Sources/AccessibilityDocumentation/Documentation.docc/Getting Started.md index 4643a0d..1fc9fba 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Getting Started.md +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Getting Started.md @@ -9,21 +9,21 @@ Accessibility part of UIKit and SwiftUI frameworks helps developers to represent There are three main assistance for blind or motion limited users: @Links(visualStyle: compactGrid) { - - - - - + - + - } - **** helps blind or low-visioned persons to use a phone by listening audio description of UI and command by different swipes and non-direct touches. Developer prepare text description of the element, iPhone will generate voice description from text. ![VoiceOver gestures](VoiceOverGestures) -- **** adds additional commands over graphical UI to control a phone by voice commands. A user of VoiceControl can see, but can't touch their phone, as a result he can pronounce commands lite "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speach, convert it to text and links command to elements' description. +- **** adds additional commands over graphical UI to control a phone by voice commands.
A user of Voice Control can see, but can't touch their phone; as a result, they can pronounce commands like "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speech, converts it to text and links commands to elements' descriptions. -![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) +![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) -- **** allows to connect external devices and link them to any command. As a result paralyzed people can control a phone by simple signals: finger movement, muscle stretches, etc. Also, a iPhone's camera can recognize facial expression or any sound like a command. In the end user moves focus on screen and pass command to focused element by submenu that is presented after selection. +- **** allows connecting external devices and linking them to any command. As a result, paralyzed people can control a phone with simple signals: finger movements, muscle stretches, etc. The iPhone's camera can also recognize a facial expression or a sound as a command. In the end, the user moves focus on screen and passes a command to the focused element via a submenu that is presented after selection. -![Switch control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) +![Switch Control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) > Note: Watch [Apple's playlist about Accessibility ](https://www.youtube.com/playlist?list=PLIl2EzNYri0cLtSlZowttih25VnSvWITu) for inspiration @@ -47,14 +47,14 @@ Step by step practice course. @TabNavigator { @Tab("Switch Control") { Allows to connect external devices and link them to any command. As a result paralyzed people can control a phone by simple signals: finger movement, muscle stretches, etc. Also, a iPhone's camera can recognize facial expression or any sound like a command.
In the end user moves focus on screen and pass command to focused element by submenu that is presented after selection. - ![Switch control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) + ![Switch Control modes: focus groups on elements, cross selection and submenu](SwitchControlOverview) } @Tab("Voice Control") { - Adds additional commands over graphical UI to control a phone by voice commands. A user of VoiceControl can see, but can't touch their phone, as a result he can pronounce commands lite "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speach, convert it to text and links command to elements' description. + Adds additional commands over the graphical UI to control a phone by voice. A user of Voice Control can see, but can't touch their phone; as a result, they can pronounce commands like "select Pepperoni", "tap purchase" or "close screen". iPhone recognizes speech, converts it to text and links commands to elements' descriptions. - ![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) + ![Voice Control modes: with labels, enumerated elements or grid](VoiceControlOverview) } @Tab("Voice Over") { diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/1. Basic/AdoptingCell.tutorial b/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/1. Basic/AdoptingCell.tutorial index 9d627be..e8189fc 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/1. Basic/AdoptingCell.tutorial +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/1. Basic/AdoptingCell.tutorial @@ -1,7 +1,7 @@ @Tutorial(time: 10) { @Intro(title: "Adopting Cell") { - To have assistive technology work as intented sometimes it is needed to **simplify complex cells** to such degree so there is no difference for accessibility features between *differentiated abstractions* that are stored in the cell.
In other words, if there is a cell with pizza's description it is understandable to distinguish data by its nature: have an image as an illustration, a title, a list of ingridients and a price - but it complicates the work for VoiceOver, VoiceControl and SwitchControl. Such layout makes it adapt the cell's contents **wrong**. Let's take a look of what can be done to help our digital assistants navigate through the cognitive models we come up with,. + To have assistive technology work as intended, it is sometimes necessary to **simplify complex cells** to such a degree that there is no difference for accessibility features between the *differentiated abstractions* stored in the cell. In other words, if there is a cell with a pizza's description, it is natural to distinguish data by its nature: an image as an illustration, a title, a list of ingredients and a price - but that complicates the work for VoiceOver, Voice Control and Switch Control. Such a layout makes them interpret the cell's contents **wrong**. Let's take a look at what can be done to help our digital assistants navigate the cognitive models we come up with. } @@ -208,7 +208,7 @@ ``` @Justification(reaction: "Try again!") { - **VoiceOver *doesn't* distinguish such pieces of data**, but **VoiceControl *has to have* price as a part of the element's label** in order to use it correctly. + **VoiceOver *doesn't* distinguish such pieces of data**, but **Voice Control *has to have* the price as a part of the element's label** in order to use it correctly. } } @@ -221,7 +221,7 @@ @Justification(reaction: "That's right!") { - VoiceControl will have a simple label, VoiceOver reads the price before the ingredients (because it's more important). Good job! + Voice Control will have a simple label, and VoiceOver reads the price before the ingredients (because it's more important). Good job! } } diff --git a/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/2.
Advanced/AdjustableTutorial.tutorial b/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/2. Advanced/AdjustableTutorial.tutorial index 49b8abd..af50485 100644 --- a/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/2. Advanced/AdjustableTutorial.tutorial +++ b/Sources/AccessibilityDocumentation/Documentation.docc/Tutorials/2. Advanced/AdjustableTutorial.tutorial @@ -35,12 +35,12 @@ @Section(title: "Backward Compatibility") { @ContentAndMedia { - VoiceControl and SwitchControl work with separate *buttons* instead of *adjustable elements*. + Voice Control and Switch Control work with separate *buttons* instead of *adjustable elements*. } @Steps { @Step { - > Important: Adjustable elements are only used for VoiceOver and will break the behaviour of VoiceControl and SwitchControl. + > Important: Adjustable elements are only used by VoiceOver and will break the behaviour of Voice Control and Switch Control. Distinguish their behaviour in code by a dynamic getter: @@ -48,7 +48,7 @@ } @Step { - SwitchControl requires grouping: firstly focus will be placed on the group itself, afterwards the selection will be moved between elements of this group. It simulates navigation by reducing the number of elements on each level. + Switch Control requires grouping: first, focus is placed on the group itself; afterwards, the selection moves between the elements of this group. It simulates navigation by reducing the number of elements on each level. > Note: Watch video [How Grouping Simplifies Navigation](https://youtube.com/shorts/1l8H615EkV0?si=tKyhIGjBbR9XG9HP) @@ -62,7 +62,7 @@ What assistive technologies use adjustable trait? @Choice(isCorrect: false) { - SwitchControl + Switch Control @Justification(reaction: "Try again!") { @@ -84,7 +84,7 @@ @Justification(reaction: "Try again!") { - VoiceControl is mostly used by people who are able to see. In such case percieving elements as separate buttons is preferred.
+ Voice Control is mostly used by people who are able to see. In that case, perceiving elements as separate buttons is preferred. } } } diff --git a/Sources/AccessibilityDocumentation/UIAccessibility_.swift b/Sources/AccessibilityDocumentation/UIAccessibility_.swift index 1d92b6c..c54cfa8 100644 --- a/Sources/AccessibilityDocumentation/UIAccessibility_.swift +++ b/Sources/AccessibilityDocumentation/UIAccessibility_.swift @@ -636,15 +636,15 @@ public class Book { /** - Use UIAccessibilityIsSwitchControlRunning() to determine if Switch Control is running. - Listen for UIAccessibilitySwitchControlStatusDidChangeNotification to know when Switch Control starts or stops. + Use UIAccessibilityIsSwitchControlRunning() to determine if Switch Control is running. + Listen for UIAccessibilitySwitchControlStatusDidChangeNotification to know when Switch Control starts or stops. */ @available(iOS 8.0, *) - public static var isSwitchControlRunning: Bool { false } + public static var isSwitchControlRunning: Bool { false } @available(iOS 8.0, *) - public static let switchControlStatusDidChangeNotification: NSNotification.Name = .init("") + public static let switchControlStatusDidChangeNotification: NSNotification.Name = .init("") /// Returns whether the system preference for Speak Selection is enabled
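
Note that the rename must not touch the Switch Control status API itself: `UIAccessibility.isSwitchControlRunning` and `UIAccessibility.switchControlStatusDidChangeNotification` are the real UIKit identifiers. A minimal sketch of using them, assuming a hypothetical `MenuViewController` with an `updateForSwitchControl()` handler:

```swift
import UIKit

final class MenuViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // React when Switch Control is toggled while the screen is visible.
        NotificationCenter.default.addObserver(
            forName: UIAccessibility.switchControlStatusDidChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.updateForSwitchControl()
        }
        updateForSwitchControl()
    }

    private func updateForSwitchControl() {
        if UIAccessibility.isSwitchControlRunning {
            // e.g. expose grouped containers so focus lands on groups first
        }
    }
}
```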
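
The "dynamic getter" mentioned in the AdjustableTutorial's Backward Compatibility step could look like the sketch below. It is only an illustration, not the tutorial's actual sample: `StepperCell` and its increment/decrement handlers are hypothetical, and the idea is to expose the adjustable trait only while VoiceOver is running, so Voice Control and Switch Control continue to see separate buttons:

```swift
import UIKit

final class StepperCell: UITableViewCell {

    // Adjustable suits VoiceOver only; Voice Control and Switch Control
    // work better when the plus/minus buttons stay separate elements.
    override var accessibilityTraits: UIAccessibilityTraits {
        get {
            UIAccessibility.isVoiceOverRunning ? .adjustable : super.accessibilityTraits
        }
        set { super.accessibilityTraits = newValue }
    }

    override func accessibilityIncrement() {
        // Hypothetical handler: forward to the same action as the "+" button.
    }

    override func accessibilityDecrement() {
        // Hypothetical handler: forward to the same action as the "-" button.
    }
}
```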