Equivalent Representations of Multimodal User Interfaces Through Parallel Rendering (1st Edition)

Authors:

Dr Kris Van Hees

Type: Hardcover / Paperback / Loose Leaf
Condition: Used/New

In Stock: 1 Left

Shipment time

Expected shipping within 2-3 days

Total Price:

$0

List Price: $12.99 Savings: $12.99 (100%)


Book details

ISBN: 9460185045, 978-9460185045

Book publisher: Katholieke Universiteit Leuven


Doctoral dissertation submitted in partial fulfilment of the requirements for the degree of Doctor in Engineering.

Even though GUIs have been in existence since 1974, blind users still face many obstacles when using computer systems with a GUI. Over the past few years, our daily life has become more and more infused with devices that feature this type of UI. This continuing trend increasingly impacts blind users, primarily due to the implied visual interaction model. Furthermore, the general availability of more flexible windowing systems such as the X Window System has increased the degree of complexity by providing software developers with a variety of graphical toolkits to use for their applications.

Alternatives to the GUI are not exclusively beneficial to the blind. Daily life offers various opportunities where presenting the UI in a different modality may be a benefit. After all, a disability is a condition that imposes constraints on daily life, and often those same constraints are imposed by environmental influences.

Current approaches to providing alternative representations of a UI tend to obtain information from the default (typically visual) representation, using a combination of data capture, graphical toolkit hooks, queries to the application, and scripting. Other research explores the use of adapted UI development, or context-based runtime UI adaptation based on user and environment models. All suffer from inherent limitations because they provide alternative representations as a derivative of the default representation, either as an external observer or as an adapted UI.

Based on the original design principles for GUIs, this work shows that the original design can be generalised: a GUI is essentially the visualisation of a much broader concept, the metaphorical user interface (MUI). Expanding upon this MUI, a new definition is provided for "graphical user interface".
The well-known paradigm of providing access to GUIs rather than to graphical screens has been very influential in the development of assistive technology solutions for computer systems. Validation for this paradigm is presented here, and based on the MUI concept, the focus of accessibility is shifted to the conceptual model, showing that access should be provided to the underlying MUI rather than to the visual representation.

Building further on the MUI concept, and on past and current research in human-computer interaction and multimodal interfaces, a novel approach to providing multimodal representations of the UI is presented, in which alternative renderings are provided in parallel with the visual rendering rather than as a derivative thereof: parallel user interface rendering (PUIR). By leveraging an abstract user interface description, both visual and non-visual renderings are provided as representations of the same UI. This approach ensures that all information about UI elements (including semantic information and functionality) is available to all rendering agents, eliminating problems such as requiring heuristics to link labels and input fields, or seemingly undetectable elements.

With the PUIR framework, user interaction semantics are defined at the abstract level, thereby ensuring consistency across input modalities. Input devices may be tightly coupled to specific renderings (e.g. a pointer device in a bitmap rendering), but all user interaction by means of such a device maps to abstract semantic events that are processed independently of any rendering. The novel approach presented in this work offers an extensible framework in which support for new interaction objects can be included dynamically, avoiding the all-too-common frustration of waiting for assistive technology updates that might incorporate support for the new objects.
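The parallel-rendering idea described above can be illustrated with a minimal sketch: one abstract UI description, two rendering agents that each present the same elements, and user input resolved as an abstract semantic event independent of either rendering. All class and function names below are hypothetical illustrations, not the actual PUIR framework API.

```python
# Minimal sketch of parallel UI rendering (hypothetical names, not the PUIR API).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AbstractButton:
    """Abstract UI element: label and behaviour, no presentation details."""
    label: str
    on_activate: Callable[[], None]


@dataclass
class AbstractUI:
    """Single source of truth shared by all rendering agents."""
    title: str
    elements: List[AbstractButton] = field(default_factory=list)


class VisualRenderer:
    """Renders the abstract UI as a text 'bitmap' representation."""
    def render(self, ui: AbstractUI) -> str:
        rows = [f"[ {e.label} ]" for e in ui.elements]
        return f"== {ui.title} ==\n" + "\n".join(rows)


class SpeechRenderer:
    """Renders the same abstract UI non-visually, as speech output text.
    Labels need no heuristics: they come straight from the abstract element."""
    def render(self, ui: AbstractUI) -> str:
        rows = [f"button, {e.label}" for e in ui.elements]
        return f"window, {ui.title}. " + ". ".join(rows)


def activate(ui: AbstractUI, label: str) -> None:
    """Abstract semantic event: whichever modality the input came from
    (pointer click, keyboard, speech command), it is processed here,
    independently of any rendering."""
    for e in ui.elements:
        if e.label == label:
            e.on_activate()
            return
    raise KeyError(label)


log: List[str] = []
ui = AbstractUI("Demo", [AbstractButton("Save", lambda: log.append("saved"))])

print(VisualRenderer().render(ui))   # visual representation
print(SpeechRenderer().render(ui))   # non-visual representation of the same UI
activate(ui, "Save")                 # same semantic event from any modality
```

Adding a new interaction object in this scheme means defining one new abstract element type; every rendering agent that understands the abstraction can present it at once, which is the extensibility point the dissertation emphasises.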