Adventures in Technology Enhanced Learning @ UoP


NEW Add-on for Grackle – Now Available for all UoP Students and Staff

You’ll be pleased to hear that Grackle – an accessibility checker for Google Docs, Slides and Sheets – is now easily available for you to use when creating documents so that everyone can enjoy your content.

Grackle is simple to use: log in to your UoP Google account and open the Google document that you want to check for accessibility issues. Once you have your document open, select Add-ons from the toolbar and then Grackle > Launch from the drop-down menu.

Screenshot displaying 'Add-on' location

Grackle’s accessibility checker panel will then appear on the right-hand side of your screen. It produces a checklist of common accessibility problems and highlights any of these issues in your document.

The example below illustrates where Grackle has identified missing Alt text. However, the add-on offers so many more useful tips to improve the accessibility of your documents.

Grackle checklist screenshot

Take a look at this website to find out more about the handy features of Grackle.

Accessibility: Investigating Automatic Media Transcription


Accessibility is now an important aspect of digital learning. We need to take accessibility seriously both to satisfy the needs of an increasingly diverse student body and the requirements recently brought into law. Of course, digital learning often encompasses a wide variety of resources in a range of media. The challenge of bringing all these resources in line with regulations is considerable, both on a technical and organisational level. Fortunately technology can help to ease the burden, with a number of integrations available to help examine and fix content-related accessibility issues.

One particularly large challenge, and one that is particularly helped by the use of technology, is video. While it is possible to watch and transcribe a video manually, when faced with a library of nearly 8000 hours of video, the challenge becomes insurmountable! This is where technology can step in: it can automate the process and reduce the number of person-hours required.

For quite some time, YouTube has been able to automatically caption videos. In the past, however, the transcriptions produced by the algorithms have often been the subject of ridicule for the sometimes bizarre and hilarious interpretations. Thankfully things have moved on considerably, with increasingly advanced AI and machine learning helping to increase the reliability of computer transcription.

For the majority of our video content, we rely upon a home-spun system composed of a Wowza Streaming Media server and a custom-built front-end to manage content and metadata. While this system has the facility to allow subtitles to be added, it does not feature any way to automate the process of creating transcriptions. For this reason, we are currently investigating our options, with a view to either hosting our video content elsewhere or improving our current provision by implementing auto-transcription facilities.

The contenders

We have been investigating a few services to judge the accuracy of the transcription. We have tried each service with the same videos to see how accurately they can transcribe a variety of media content. Below are some details of three services we are currently examining.

Mozilla Deepspeech

An open-source option that can be run on-premises, Deepspeech requires a certain amount of technical skill in deploying and managing Linux servers. Being open-source and community driven, the more effort you put in, the better the output will be. It allows you to train your own neural network to increase the accuracy of transcriptions, so theoretically it would be possible to improve your transcription accuracy, although it may require a large investment of time and effort. As we are simply testing the out-of-box abilities, we have used the default models provided by the developers.

Google Speech to Text Engine

This is an API made available through the Google Cloud Platform. The service itself is used by YouTube to provide auto-transcriptions of uploaded videos. While using it through YouTube is free at the point of upload, utilising the API in your own projects can cause costs to rack up quickly (and remember that we have 8000 hours of video sitting on our servers, waiting to be transcribed). The pricing options are transparent, however, so we can easily calculate the cost of transcribing all of our existing content.

Amazon Transcribe

This cloud service is utilised by Amazon’s virtual assistant “Alexa” and works in a similar way to Google’s offering, with transcription charged based upon the number of seconds of content transcribed. The service is used by the content capture service Echo 360 to transcribe material. By our rough calculations, transcribing our 8000 hours of content through Amazon would be a little cheaper than through Google.

The results

Here are some example transcriptions of one short piece of video content:

Mozilla Deepspeech

so wee al seend apisode of the dragon tf dend where the ontroprenel holks in with a really great idea good looking numbers the dragons e recing out their hands and then one of the dragons pipes up let see your contract and os soddenly ontrepenelox exposed because they thought they had a contra they don’t what they have iser some verbal understanding your colercial contracts are really important to you business mey should be kept clear concise so the point to add value when seeking in bestment wor in ed if you come to sellin a business also commercial contracts areningportant to the void conslote because both sides of the contract should now wot their obligations are a more their rights are

Google Speech to Text (through YouTube)

so we’ve all seen episodes of the Dragons Den where the entrepreneur walks in with a really great idea good-looking numbers the Dragons are eating out their hands and then one of the Dragons pipes up let’s see your contract and all the sudden the entrepreneur looks exposed because they thought they had a contract they don’t what they have is a some verbal understanding your commercial contracts are really important to your business they should be kept clear concise to the point to add value when seeking investment or indeed if you come to sell the business also commercial contracts are really important to avoid conflict because both sides of the contract should know what their obligations are and what their rights are

Amazon Transcribe

So we’ve all seen episodes of the Dragon’s Den, where the entrepreneur walks in with a really great idea, good looking numbers that dragons reaching out their hands. And then one of the dragons pipes up. Let’s see your contract over something. The entrepreneur let’s exposed because they thought they had a contract. They don’t. What they have is a some verbal understanding your commercial contracts of really important to your business. They should be kept clear, concise to the point. Add value when seeking investment, or indeed, if you come to sell the business. Also, commercial contracts are really important to avoid conflict because both sides of the contract should know what their obligations are, what their rights on.


As you can see from the output above, while the Mozilla software makes a good guess at much of the content, it gets confused elsewhere, inventing new words and joining others together to produce a rather useless text that poorly represents what was actually said. I’m sure its abilities will improve as the community spends more time training the neural network. However, Google and Amazon clearly have the upper hand – which is not surprising, given their extensive user bases and resources.
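These impressions can be quantified with a word error rate (WER): the word-level edit distance between a reference transcript and a machine transcript, divided by the number of words in the reference. This is a minimal stdlib sketch, not the metric any of the vendors themselves report:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("let's" -> "let") in a four-word reference:
print(word_error_rate("let's see your contract", "let see your contract"))  # → 0.25
```

Running each service's output through a scorer like this against a hand-checked reference would turn "makes a good attempt" into a comparable number.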

While Amazon Transcribe makes a very good attempt, even adding punctuation where it predicts it should appear, it is not 100% accurate in this case. Some words are misinterpreted and others are missing. In the main, however, the words that are confused are not essential to the understanding of the video.

Google Speech to Text makes the best attempt at transcribing the video, getting all words 100% correct, and even adding capital letters for proper nouns that it clearly recognises. There are options to insert punctuation when using the API, but this feature is not available in the YouTube conversion process.

From this (preliminary and admittedly small) test, it seems you get what you pay for: the most expensive service is the most accurate and the cheapest is the least accurate. Also, the headline cost of using Google Speech to Text on 8000 hours of video is not necessarily accurate. We need to remember that not all of this content is actively used: this is an accumulation of 8 years of content, and it’s possible that only a small fraction of it is still actually being watched. We now need to spend some time interrogating our video statistics to determine how much of the old content really needs to be transcribed. 

The best value compromise, if we choose to continue to host video ourselves, may be to transcribe all future videos and any that have been watched at some point in the last year. In addition, it should be possible to provide an ‘on-demand’ service, whereby videos are flagged by users as requiring a transcription at the click of a button. Once flagged, the video is queued for transcription and a few minutes later a transcription is made available and the user alerted.
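The 'on-demand' flow described above can be sketched very simply: a flag puts a job on a queue, and a worker drains the queue, transcribes, and alerts the requester. The `transcribe` and `notify` functions here are hypothetical stand-ins, not our actual system or any vendor's API:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class TranscriptionJob:
    video_id: str
    requested_by: str

pending = Queue()  # jobs waiting to be transcribed, in request order

def flag_for_transcription(video_id, user):
    """Called when a user clicks the 'request transcript' button."""
    pending.put(TranscriptionJob(video_id, user))

def process_queue(transcribe, notify):
    """Worker loop: drain the queue, transcribe each video, alert the user.

    `transcribe` would be a call to a cloud speech-to-text service;
    `notify` would email or message the requesting user.
    """
    done = []
    while not pending.empty():
        job = pending.get()
        transcript = transcribe(job.video_id)
        notify(job.requested_by, job.video_id, transcript)
        done.append(job.video_id)
    return done
```

In production the queue would need to be persistent and the worker asynchronous, but the shape of the service is this simple: flag, queue, transcribe, alert.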

Video title: Warner Goodman Commercial Contracts.
Copyright: Lynda Povey (Enterprise Adviser) nest, The University of Portsmouth.

Image Credit: Photo by Jason Rosewell on Unsplash

Grackle for accessible Google Docs and Slides

I was chatting to an academic the other day. We were talking about the new tool in Moodle for automatically checking the accessibility of documents and providing alternative formats on-the-fly. It’s called Blackboard Ally and it’ll even give you step-by-step guidance on how to fix any accessibility issues (hint: click the meter icon for advice and guidance).

A screenshot of a Moodle site displaying the Blackboard Ally plugin and the accessibility icon next to a resource. The accessibility score is high

But wait, they said, that’s fine for documents, PowerPoints and PDFs, but I use Google Docs all the time – how can I improve those?

Ah, I said, you’ll need to use a bit of Grackle on that. It’s not for grouting your bathroom, it’s for fixing your Google Docs, so everyone can read them more easily.

Grackle comes in two flavours: Grackle for Google Docs and Grackle for Google Slides. You just add the extensions to your Google Chrome browser and then launch Grackle from the add-ons menu as you’re creating your Google Slides or Documents.

A screenshot from Google Docs showing how to launch Grackle from the Add-ons, Grackle Slides, Launch menu item

Grackle produces a checklist of common accessibility problems and highlights any of these issues in your documents. It’s usually very straight-forward to fix them.

The most common issues are images without alternative text (descriptive titles of the images), poor contrast between text and background colours and lack of document structure / headings. These are easily fixed and Grackle will show you exactly where these problems appear in your documents.
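Of these issues, contrast is the one that can be checked entirely mechanically: WCAG 2.x defines a contrast ratio from the relative luminance of the foreground and background colours, and AA conformance requires at least 4.5:1 for normal-size body text. This is a sketch of that published formula, not how Grackle itself is implemented:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 up to 21:1.
    WCAG AA requires >= 4.5 for normal-size body text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white → 21.0
```

A checker only has to walk the document, compute this ratio for each text run, and flag anything below the threshold – which is exactly the kind of tedium best left to a machine.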

A screenshot from Google Slides showing the Grackle accessibility advice panel.

Take a look at this website to find out more about the handy features of Grackle.

Disclaimer: Parts of this conversation may have been embellished for entertainment value…

Image Credits: Photo by Jason Leung on Unsplash


Print-based booklet to accessible online resource


As an online course developer, I recently had the task of creating an accessible online version of a print-based Wellbeing information booklet produced by Marketing, which, as one might expect, was quite heavy on graphics and styling.

The first page gives a good idea of what the 28-page PDF copy looks like:

Wellbeing PDF booklet front cover

What is an accessible document?

An accessible document is simplified in two senses: visually, in its presentation, and technically, to meet specific criteria for accessibility software. The former involves making sure that headings and fonts are clear and bold and that images have proper alt text to explain what they are. The technical concerns are things like making sure the document has proper metadata, a logical reading order for screen readers, and correct tagging. These concerns can be addressed using common word processing software – in this case, Microsoft Word 2016 and Adobe Acrobat DC.


Checking for accessibility

To check the document, the Blackboard Ally plugin for Moodle was used. The original document had a score of just 8%:

However, it should be noted that this number is derived purely from the number of problematic instances found, rather than being a rating of how readable the document is per se. The Ally tool does provide some useful guidance on how to fix the errors, explaining what each issue is, what it means, and how to practically fix it.


Converting to alternative formats with Blackboard Ally

A key feature, used in this task, is the ability to automatically convert the document to an accessible version of your desired format.

The HTML format was exported in this case, and the result was a fairly good rendition of the basic text with foreground images kept. However, there were some critical errors in the conversion, which meant that more than a simple post-editing tidy-up was needed. For example, the information from the table cells in this document didn’t export in a structured format, so it was completely unreadable.

Original (left) versus Ally conversion to HTML:

As a result the document would need some manual re-entering of text to finish it off.


Editing with Word 2016

To begin with, the Ally HTML output was opened in Word 2016 as a good starting point. The first job was to go through and make sure all text had been properly converted. This was the most time-consuming part of the whole process, followed by subsequent proofing.

With that done, it was then down to solving the technical issues using Word’s built-in accessibility tools.

This tool checks things such as whether tags are added, and more subtle things like whether a table has a heading row, and/or whether it’s properly marked up as such for screen readers. This is mostly a case of clicking each fault and either automatically fixing it (as in the case of meta tags) or manually fixing (the table headings had been stripped by the Ally conversion and so had to be manually re-entered as heading rows).

Once everything has been checked/ticked off, it’s then a case of exporting it as a PDF document.

At this point the advanced saving options panel was used to make sure the PDF would have the correct accessibility features by adding structure tags:

Now, in an ideal world, this would be the end of the process. However, Word 2016 falls short right at the final step here: for some bizarre reason, it fails to add a title in the metadata. You can certainly add a title in Word, but it seems to be ignored when converted to PDF. Because of this, Adobe Acrobat DC had to be used to finish it off – though this was partly the original intention anyway, in order to ‘run it by a second pair of eyes’, so to speak.


Finalising with Adobe DC

To finish off, the outputted PDF was opened with Adobe Acrobat DC which has its own accessibility tools.

This flagged up a few more problems and was able to auto correct them. It also enabled me to add the title metadata and then finally export the finished PDF.


The finished result

A 100% perfect score in Blackboard Ally!

The resultant document should now be 100% compatible with related accessibility assistive software. The image shown here has been properly tagged with alt text to explain what it’s representing, and so is perfectly acceptable in an accessible document.

It’s important to remember that accessible documents do not necessarily have to be pure text. And whilst the focus here is ultimately on making visual content accessible for the visually impaired, there’s no harm in adding a little well-chosen, conforming colour.


Image credit:
Header Banner, taken Feb 2019

Accessibility of digital learning content at UoP

On 15 January 2019, following a two-month pilot, the TEL team switched on the Blackboard Ally plug-in across all modules on Moodle. In brief, if a lecturer has uploaded some digital course content to Moodle (typically Word documents, PowerPoint presentations or PDF files) then Ally permits students to download that content in an alternative format (electronic Braille, html, epub, tagged pdf, or mp3). This is great for accessibility, of course, but it is also an inclusive approach: any student, not just one with a particular need, might choose to download a Word document in mp3 format (to listen to on the go) or in epub format (to get the benefit of reflowable text on an e-reader). The TEL team will be providing students with more information about Ally over the coming weeks, but in this post I want to mention a feature of Ally that is of interest to authors of digital course content.

Ally generates an institutional report about the accessibility of course content on the institution’s VLE. So we now know what the most common accessibility issues are for the 38,462 course content files on Moodle. The top five are (drum-roll please):

  1. The document has contrast issues. Just under half of all documents (48%, to be precise) have contrast issues.   
  2. The document contains images without a description. Roughly 43% of all documents commit this accessibility sin.
  3. The document has tables that don’t have any headers. Just over a quarter (26%) of all documents have this issue; I suspect that the documents without this issue are simply those without tables.
  4. The document does not have any headers. This is a problem for 24% of documents.
  5. The document is missing a title. Again, 24% of documents have this problem.

The first four are classed as major accessibility issues; the fifth issue is classed as minor.

At first glance this seems shocking: about half of all documents suffer from a major accessibility issue to do with contrast. When we compare ourselves against other institutions, however, we learn that these issues seem to be common across the HE sector; indeed, we seem to be doing slightly better than many institutions. And the important thing is, now that we know what the issues are, we can start to address them. Over time, we should be able to drastically reduce the number of documents with these common – and easily fixable – problems.

One piece of good news: we have a relatively small number of documents that possess accessibility issues classed as severe. The most common severe issue at Portsmouth – just as it is at other universities – involves scanned PDFs that have not been put through OCR. There might be good, valid reasons why a scanned PDF has been used. But accessibility would certainly improve if authors minimised their use of such files.

Header image taken from link. Retrieved from
(Accessed: 17 January 2019). Thank you to Ally for giving us permission to use their image.

Thinking about accessibility

I’ve been doing a lot of thinking about accessibility and Moodle recently as we move the Moodle Baseline project into the pilot stage. It’s become clear that many of us don’t make our responsibility to create accessible content a top priority, even though only a small amount of extra time is needed to vastly improve the experience of differently abled users consuming your Moodle content. I’m not going to call any specific Moodle sites out here, but some of the most prevalent bad practice that somewhat surprisingly still exists includes using HTML tables for navigation and layout, and using images for headings or navigation.

Both of these issues are problematic for users of screen readers. Whilst it is true that screen reader support for tables has improved, tables should still not be used for navigation or layout. Every time the screen reader enters a table cell, it will tell the user which cell they are in. You can easily see how this is not a good user experience if you have to work your way through a four-by-five table with one or more links in each cell. Using something more appropriate, such as an HTML list marked up with the nav role, gives a more streamlined experience: a screen reader can use this information to offer its user the option to skip the navigation and go straight to the page content. For more information on this, have a look at the W3 Schools page detailing the nav role.

Using an image for a heading isn’t automatically a terrible thing, provided it’s used in conjunction with HTML alt text or, wherever possible, an ARIA attribute to tell a screen reader how the image is being used. You can also use CSS to replace a text link with an image: the image is seen by those browsing visually, while the HTML text link remains available to those using a screen reader. It just so happens that Bootstrap 4 has an easy way to do this, which everyone can use after the Moodle upgrade in August. Bootstrap also offers ways to totally hide elements in your HTML content from everyone except those using screen readers, so you can really go the extra mile to offer content that’s easier to digest audibly.

There are reasons why you might need custom navigation, but there are also times when the topic jump list should be more than sufficient for navigating between topics on a Moodle site. If you find that this is almost good enough – but not quite – please talk to us and we’ll try to make it totally good enough for you to use. If you’d like to find out more about accessibility, I would heartily recommend the Digital Accessibility MOOC on FutureLearn. It really opened my eyes to accessibility issues I’d never considered – it made me realise that what I thought I was doing to enable differently abled people to read my content wasn’t, in fact, enough.

I’ll leave you with this from our Moodle content guide, which will arrive with the new theme after the upgrade in the summer:

Accessibility for Moodle content means that your content is available to be consumed by all users, regardless of their ability. Creating accessible learning content is the responsibility of us all – it’s not something that should be left until later, or thought of as the responsibility of someone else.

The Portsmouth Moodle – Accessibility Snapshot

In January 2018 we were lucky enough to host a visit from Alistair McNaught, a JISC subject specialist on accessibility and inclusion. Alistair spent a day at the University as a “mystery shopper”, playing the role of a student with disabilities who was trying to access various digital resources and services. He looked at the full range of services – prospectus, website, Library platforms and Moodle – but here I’ll focus on his observations about the VLE.

The first thing to note is that Alistair had difficulty logging on to a PC in the morning: it took more than ten minutes for the desktop to appear. The student sitting next to him confirmed that, after the initial boot, it often did take a long time before a public PC was in a state that allowed work to take place. Not good for a student with ADHD!

Alistair confirmed that tab order (for keyboard navigation) works well in Moodle and the visual tracking of focus is good. There’s easy navigation with breadcrumb trails and a navigation side panel; this is important because good navigation assists all users, especially assistive technology users. The Moodle accessibility block is available and obvious on all pages, and Equality and Diversity information is easily discoverable. The self-enrol E&D course has lots of very good, easily accessible, generic awareness-raising resources; and there are easy-to-find PDF resources on equality data – these have good reflow and colour change possibilities. All this is good news and it allows us to build on – in Alistair’s words – conscious competence.

However, there are some things we need to think about. For example, some of our third-party resources have accessibility issues; we are to some extent a hostage to fortune in these cases, but at least now we are in a position to raise the points with the suppliers. Another issue was that some of our generic units have poor colour contrast; Alistair pointed us to a tool – the Colour Contrast Analyser from the Paciello group – which will help us identify these problems more readily. And once we are aware of them, it’s easier to fix.

Alistair also took a look (with the consent of the academics involved) at a couple of teaching units from ICJS. He was highly impressed with the pedagogical approach taken in these units, and he praised a number of aspects. A “lovely human [video-based] introduction adds value for many students” – but he added that it “would be even better with transcript or captions”. It was “great to see active use of rich media and a nice visual key to resources”; the “direct links to reading resource and final assessment” were useful; and the “impressive range of resources” were “well organised” and had “clearly scaffolded teaching with explanations and pointers to the purpose of the resources”. Where resources could cause access issues this had been recognised and a genuine attempt made to remedy it with a PDF alternative (however, the PDF had its own accessibility issues, and so did the ‘Click here’ link text). Finally, a Useful News and Information block showed “great currency, with tie-in to contemporaneous issues”. So, again, there is a lot of conscious competence on which we can build.

These units had some issues; fortunately, they are easily fixed. For example, hyperlinks need unique and meaningful link text so that assistive technologies that gather page links together can give users meaningful information. If an author links only the words “Click here” in “Click here to browse an interactive timeline of key events”, then the result from assistive technologies might be a long list of “Click here”s – which is entirely uninformative. Much better to make the whole phrase – “browse an interactive timeline of key events” – the link text. Another problem came from an interactive Articulate resource that failed to load; even if it did load, Articulate generally produces output with limited accessibility. And some structures had untitled navigation elements, which would cause problems for some users. (This last issue might be down to an underlying Moodle template issue; Alistair pointed us to another tool – the HTML5 Outliner plugin for Chrome – that will help us investigate this further.)

All in all this was a tremendously useful visit. We know there are areas of good practice we can build on, and there are issues we can fix.  And it truly is worth pursuing this: if we take an inclusive approach to Moodle and the content on it, all learners will benefit.

Feature image title:  Web Accessibility Word Cloud by Jill Wright is licensed by CC-BY 2.0 on Flickr

Guest Blogger: Johny Cassidy – How technology enables me to do my job as a journalist

Johny Cassidy
Producer at BBC Business News

Johny’s Bio – Guest-blogger Johny Cassidy is a producer at BBC Business News and masterminded the BBC Disability Works week earlier this year. He tells us how using the latest technology enables him to do his job as a journalist – Johny is blind, so technology is an essential tool. He goes on to say that with greater understanding from employers, disability shouldn’t exclude anyone from the workplace.

“It’s fair to say that technology and the fast pace at which it’s developing has been pivotal to me in my job as a BBC journalist. It’s also not an exaggeration to say, that without it, I simply wouldn’t be able to do my job to the standard expected.

I began losing my eyesight when I was in my teens. The things that the majority of people take for granted slowly began to be taken from me. The simple pleasure of reading a book or accessing information became difficult, and then impossible. That’s when I began looking around to see how technology might be able to bridge that gap, which thankfully it has been able to do.

It’s vital for me as a business and economics journalist to have access to the same information as other people. A normal day will usually start and end with Twitter, which is all accessible for me on my iPhone through the voiceover function, which reads things on the screen out loud through headphones. Apple really changed the landscape for the majority of blind people when it first introduced the iPhone with this access technology built into the operating system as standard. Up until then I had to pay for a separate piece of software which was then integrated into the phone I was using. This was, whilst a useful tool to read texts and to access simple functions on the phone, a clunky solution which wasn’t really fit for purpose. The fact it cost over £600 also meant it was out of reach for a lot of people. Apple changed all that by levelling the playing field and by understanding the need for accessibility in their devices.

By using Twitter I’ve got a direct feed into the financial markets and business publications. I follow thousands of different feeds to ensure the information I’m getting is up to date and accurate. The fact as well that the majority of newspapers and magazines are also available to me on my phone means that, by the time my train to work gets into London, I am usually up to speed with the overnight developments in the business world. 

Once I’m in the office the phone is replaced by my laptop. Again this has a voiceover function which allows me to read the hundreds of emails waiting for me. I could of course do this on the train as well, but that time is usually set aside for gathering information.

Once emails have been waded through and either answered or deleted, I then start to look at what stories might be around for the next few days. As a forward planning producer I need to know what’s coming up in order to make sure the dozens of BBC programmes and outlets know what business stories they should be looking at. All these stories are held on a news diary which is, after a lot of collaboration with the developers, also accessible to me with the voiceover function on my laptop. This is perhaps one of the biggest problems I face at work. The myriad of different BBC applications, for news production, both TV and radio and online, all need to be accessible, which means it’s vital that I feed back into the developer teams if I come across any accessibility issues. All sounds pretty straightforward, but for an organisation as big as the BBC and with so many different points of contact, the process doesn’t always go as smoothly as perhaps it should.

Once stories have been identified for the next few days, it’s time to begin the meetings in order to explain to other producers and editors what they should be doing businesswise. My phone is again a really useful tool to take notes and have information I need at these meetings.

The BBC is a fantastic employer of disabled people. The fact that attitudes are finally beginning to change and that more employers are realising that disability isn’t a barrier to work means that the need for technology and solutions to problems will continue to be an ongoing battle. The biggest battle, though, is still trying to ensure that hiring managers understand that these solutions exist. Many disabled people have dozens of their own workarounds and so-called hacks they use every day in order to work or to simply live. The more people who know what technology as a tool can do in order to bring more disabled people into employment, the better.

I read a statistic recently which said that around 65% of the jobs our children will be doing when they reach working age don’t even exist yet. That’s a huge concept to try and understand, but it proves how fast technology is moving and the fact that we can’t stand still. We need to be constantly looking around for more innovative ways of harnessing technology in order to be able to do our jobs, whatever they may be.” 

Image credit: BBC News

Twitter: @johnycassidy

‘Disability Works: Breaking down barriers in business’ – article by Johny:

‘Tech Tent: Making tech work for everyone’ – article featuring Johny:

‘Disability Works Special’ – Tech Tent radio programme featuring Johny:

JAWS screen reader software:

‘Sit Down with Johny Cassidy’ – article featuring Johny:

Accessible documents – how easy is it to read your digital document?

What sort of question is that, you might ask – but to someone who is visually impaired and possibly using screen reader technology, documents can vary significantly in their usability. By making a few small changes when creating documents, it is possible to improve their usability for everyone.

When we download a file from a web page or receive an email attachment, the majority of us can do a quick visual scan of a document to find the information we want. However, if there is no structure (where a larger font size and bold text has been used for headings), those using a screen reader will need to read the whole document to find the relevant parts – a time consuming process with a long document. Searching for a particular word/words could also miss relevant information. Adding structure will make the whole document navigable – easily done by using styles to construct a hierarchy of headings that can then be used to create a table of contents.

It might seem a chore to have to set up styles before starting to write your document, but spending a few extra minutes creating styles for documents longer than a few pages could assist all readers, whether or not they are visually impaired. Once you have set up styles or modified existing default styles, you will then be able to save time in the future and use this file as a template for further documents.

Here are some of the most basic things you could do to make a document more accessible:

  • Use a clear sans serif font of at least 12 point such as Arial, Calibri or Helvetica
  • Use a hierarchy of heading styles to add structure to your document that will also enable easy creation of a table of contents:
    • heading 1 – title
    • heading 2 – chapters
    • heading 3 – sections
    • heading 4 – sub-sections
  • Create a table of contents for longer documents with sections
  • Avoid adding a blank line at the end of paragraphs using a hard return on your keyboard – incorporate space after a paragraph using paragraph styles instead
  • Insert a page break at the bottom of a page rather than adding a few blank lines
  • Align text left rather than justifying it – left-aligned text is easier to read
  • Use high-contrast colours between text and the background
  • Avoid reverse text (light-coloured text on a dark background), which is difficult to read
  • Use a plain background – busy backgrounds make text hard to read
  • Use Alt Text (alternative text) for any images in your document to describe the image for those using screen readers
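The same structural principles apply to web pages, which is where screen readers handle structure most directly. As a rough illustration (not part of the original article – all names here are made up for the example), the following Python sketch assembles a small HTML page with a proper heading hierarchy, a clear sans serif font, left-aligned high-contrast text, and alt text on images:

```python
def make_accessible_page(title, chapters, images=()):
    """Build a minimal HTML page with a navigable heading hierarchy.

    `chapters` is a list of (chapter_title, [(section_title, body), ...]).
    `images` is a list of (src, alt_text) pairs – every image gets alt text.
    """
    parts = [
        "<!DOCTYPE html>",
        '<html lang="en">',
        "<head>",
        f"<title>{title}</title>",
        # Clear sans serif font of at least 12pt, left-aligned,
        # high-contrast black text on a plain white background.
        "<style>body{font-family:Arial,Helvetica,sans-serif;"
        "font-size:12pt;text-align:left;color:#000;background:#fff}</style>",
        "</head>",
        "<body>",
        f"<h1>{title}</h1>",  # heading 1 – title
    ]
    for chapter, sections in chapters:
        parts.append(f"<h2>{chapter}</h2>")  # heading 2 – chapters
        for section, body in sections:
            parts.append(f"<h3>{section}</h3>")  # heading 3 – sections
            parts.append(f"<p>{body}</p>")
    for src, alt in images:
        # Alt text describes the image for screen reader users.
        parts.append(f'<img src="{src}" alt="{alt}">')
    parts += ["</body>", "</html>"]
    return "\n".join(parts)

page = make_accessible_page(
    "Annual Report",
    [("Introduction", [("Background", "Why structure matters.")])],
    images=[("chart.png", "Bar chart of enrolments by year")],
)
```

A screen reader (or a tool such as Grackle) can then jump between the `<h1>`/`<h2>`/`<h3>` landmarks instead of reading the whole page, just as the heading styles above let it navigate a Word or Google document.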

Bearing these points in mind when creating a document will only take a little extra time, and these few simple changes could make a big difference in usability for the reader.

Some helpful links:

Ways to make your Word document more accessible:

The Accessibility Color Wheel allows you to try out different text and background colours:

Adding Alt Text to your Microsoft document:

Suggestions for creating accessible PDF documents:

So, next time you start a new document, why not think about making it easier to read!


© 2024 Tel Tales
