Introduction

The Oxford University Museums, such as the Ashmolean, Pitt Rivers and Museum of Natural History, hold large collections of objects. Interventions, such as audio descriptions, make these accessible to visitors with a visual impairment, but two-dimensional objects, such as maps, photographs and paintings, can still present challenges.

Touch Tours, where people with a visual impairment are able to touch items and special models created for them, are regularly run in the museums. A member of staff accompanies the group, handing out the items and describing them; without that context or description, the artefacts can be difficult to interpret. A common way of achieving this accessibility is to use swell paper: black outlines and areas of an image are heat treated so that they rise from the surface, creating a tactile image. Although these provide a touchable version of the image, on their own they are not interactive and do not support a narrative explanation. This project combines the two approaches so that re-usable audio is integral to the touch picture.

A project between the Oxford University Museums, the Oxford e-Research Centre (OeRC) and the department of Experimental Psychology works to improve access to visual art works via audio and touch interfaces. The team comprised one developer from the OeRC, three members of Museums staff and a researcher from Experimental Psychology.

The project is divided into two strands. In the first strand, the Museums, supported by the department of Experimental Psychology, are working to understand how museum collections are experienced by people who have visual impairments; this work is ongoing. The second strand, Research and Design, was led by the OeRC and aimed to develop relatively cheap and efficient methods of integrating audio description into the touch picture. It was guided by the findings of the research and user-testing strand, but also involved developing replicable approaches for integrating audio delivery into the tactile picture. Project outputs include a better understanding of how to improve access to the art collections for this audience and a re-usable technology for delivering audio in a non-linear fashion within a gallery.

In this paper, we discuss the project’s development strand from the initial machine to the final prototype of this phase. Starting with the initial prototype, we discuss the challenges it raised and the first solution, the TouchTracker application. We then consider the prototyping and participatory approaches that led to the TactilePicture prototype application [] and its web-based administrative interface, which allows the data to be uploaded and managed by museum staff.

Research Software Engineers (RSEs) might expect challenges, such as understanding new domains and terminology, as part of any project. The major challenge in this project was understanding how the world is perceived with different senses, in particular touch, and how contexts such as confidence and training shape that perception, requiring an experimental and user-driven approach.

Related Work

Various technologies, such as tablet systems and bespoke touch installations, were surveyed at the beginning of the project against the project’s aims. Firstly, the solution needed to be re-usable; secondly, it had to be updatable by members of museum staff for new applications; finally, the device needed to allow visitors to navigate the audio in their own order and at their own pace, and to reach further levels of information as they chose.

Bespoke kiosks and touch installations have been created for specific pieces of work and exhibitions, such as the Perkins School for the Blind Talk Model and the San Diego Museum of Art Talking Tactile Exhibit Panel, but they cannot be updated or easily shared. Other solutions, such as the Tactile Talking Tablet 2, required bespoke software to be updated or used in the way the museums require. Existing tablet technology, such as the Humanware tablet, does not track multiple fingers or movement in three dimensions, and it is not clear how it could be used to support this project. We wanted to provide an open-source, re-usable solution that supports a range of artefacts with minimal changes.

Milo and Reiss [] demonstrate an interactive map made from fabric, using capacitive sensors attached to a Bela board. Audio is triggered by touching an enabled area on the embroidered interface. In the presented version, the audio was taken from a sound walk. It is not clear how easily the audio touch points can be updated or how readily the interface can be adapted for new installations.

These projects supported our initial ideas of using Raspberry Pi machines and sensors. They allow visitors to navigate the exhibit for themselves, and the hardware and software are re-usable.

Initial Work

As the discovery activity began, we decided to build an early Raspberry Pi-based prototype using existing knowledge and use cases for iterative feedback in the Research and Design strand. The developer was familiar with audio and software engineering, but both hardware interaction, such as sensors, and developing for people with a visual impairment were new challenges. Initial discussions led to the creation of use cases and the identification of knowledge gaps and assumptions. During these conversations, there was a negotiation of terms and meaning between the three departments to understand the different areas – technology, museums and visitors with a visual impairment – as well as developing on low-powered devices and making simple sensors. In the vein of domain-driven design [], we identified terminology and concepts that were not clear, and converted the use cases into features using Behaviour-Driven Development [, ] to develop an initial prototype using a Raspberry Pi and touch sensors attached to a surface.

Time was spent looking at various options for buttons that satisfied both the hardware and the users’ touch requirements. It quickly became apparent that touch was more nuanced than anticipated: we needed to understand not only what was being done but how. For a sighted person, it is extremely difficult to comprehend how people who have visual impairments use touch to explore raised images – ‘touch tiles’ – of visual art works. The given object, such as a rendering of a painting or postcard, has to provide enough information yet not overwhelm the user with detail. As the audio is triggered by the visitor touching the item, the trigger point has to be responsive but not overly sensitive. When a point is rubbed across, for example whilst trying to discern what is next to it or to gain more information about that part of the picture, the audio should not play and become intrusive; however, the point must not be difficult to trigger when it is wanted. Levels of vision, time of sight loss, and familiarity and confidence in touch interpretation may also affect how the tactile picture is explored. Standard software-engineering testing methods did not support these questions, and considering them also required data from the discovery strand, encouraging us to change approach and become more user-driven and experimental.

Touch perception and the TouchTracker approach

Over fourteen months, data was collected through focus groups and tours, using interviews and observation, on how touch is used when exploring the tiles. This included: attentiveness to features, such as the shapes and textures provided; the exploration pattern, through the movements used and pressures applied; and the preferred touch-tile material, paper versus plastic. As part of the testing, the same touch-enabled image was created with different textures, line straightness, fineness, heights and symbols on the same material, and swell paper was compared with similar items made from different plastics. The user responses helped to define requirements, such as the tactile nature of the surface material and how it affects the experience. Alternate versions of the same image with different textures, such as the cobbles on the road represented by the large shaded area in Figure 1, were tested to understand how they affect the perception of the area and to inform the design of the tactile image.

Figure 1 

Swell paper image of J.M.W. Turner’s painting of Oxford High Street with the spatial co-ordinates captured by the TouchTracker application mapped on to it.

The developer joined some tours and focus groups to observe the way that objects were touched and noted differences in methods of exploring the tactile picture. Initial observations suggested that different movements and pressures were being employed to investigate the item. Some context could be derived from conversations, such as confidence with touch, levels of visual impairment and whether the person had been born with their visual impairment or had lost vision after birth. However, the question of what movements and pressure were used to touch an exhibit remained.

When considering how we might get more detailed data on how the surface was touched, photography and video were possibilities, but these would require changes to the granted ethics permission as well as posing difficulties in identifying changes in movement. Another option was a simple touch object with paint underneath it to track the movements, such as placing a see-through plastic tactile picture on top of plexiglass and observing from underneath how it was explored. This would provide details of the movements, but would only track the fingers in two dimensions and give no detail of what pressure was used where in the picture. Information on pressure was essential to ensure that the trigger point could not be triggered unintentionally.

In answer to these issues, we developed an Android application, TouchTracker [], to model how a screen is touched when a tactile picture is placed on it. The application was used with focus groups over a period of three months. Each session lasted a variable amount of time, around 20 minutes per participant, depending on the number of users and the time available; the data shown in Figure 2 derive from a test session lasting about 10 minutes. Using the Android touch event object, the application records both the pressure of the touch on the screen and the movement across it, using multiple points to provide more detailed data on the exploration pattern. A timestamp is attached to the events when they are written to a log file. This application enables us to view touch through multiple axes: with what exploring movements, e.g. fluent strokes versus rubbing, and with what pressure, measured on the device’s scale, the tactile pictures are explored. It also enables us to investigate exploration time per feature, e.g. for how long, measured in milliseconds, a certain shape is explored. A user’s finger movements are recorded while the tactile picture is placed on the screen. This allows us to test a variety of materials, e.g. swell paper versus plastic and their thicknesses, how they interact with a screen, and the practicalities involved, such as making sure the tactile picture does not move and the application stays active.
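As an illustration of the kind of per-sample record that can be captured, the sketch below shows an Android view logging position, pressure and a timestamp for every pointer in a touch event. It is a minimal sketch only: the class name, log format and field order are our own illustration, not TouchTracker’s actual code.

import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

public class TouchTrackerView extends View {

    public TouchTrackerView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        long timestamp = event.getEventTime(); // milliseconds, attached to each sample
        // One MotionEvent can carry several pointers (fingers) at once.
        for (int i = 0; i < event.getPointerCount(); i++) {
            int pointerId = event.getPointerId(i);  // stable id while the finger stays down
            float x = event.getX(i);                // screen position in pixels
            float y = event.getY(i);
            float pressure = event.getPressure(i);  // device-dependent scale, nominally 0..1
            // The real application appends a record like this to a log file;
            // here it is simply written to the Android log.
            Log.d("TouchTracker",
                    timestamp + "," + pointerId + "," + x + "," + y + "," + pressure);
        }
        return true; // keep receiving move and up events for this gesture
    }
}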

Figure 2 

A 3-dimensional view of the TouchTracker data showing the X and Y co-ordinates and the pressure applied to them. The data come from one test session with one person. The circle indicates a co-ordinate where extra pressure is applied.

A set of Python scripts extracts and visualises some of the data from the log file. The first visualisation maps the motion of the touches across the surface of the tablet as a scatter plot; these movements have also been mapped over a copy of the tactile picture to show the points of interest to the user without the intervention or guidance of a staff member. The second shows the pressure applied by a digit over time, mapping up to ten points simultaneously. The third, shown in Figure 2, is a three-dimensional image that shows the movement and pressure together, demonstrating the actions being performed. Although this work cannot consistently identify which digits are being used, it provides a simple tool to help tactile picture developers judge whether a design might be effective.

The visualisations show important aspects of touch, such as rubbing to gain further information or rapid scanning to identify the shapes in the tactile picture. Figure 2 shows that points which are difficult to recognise or identify have greater pressure applied to them: the circled point sits above thick lines of increasing pressure, reflecting the darker patches in Figure 1. These behaviours are not described in existing pattern languages such as the BBC’s Global Experience Language or Microsoft’s Fluent Design System. Although basic, the existing work allows the beginnings of algorithms for touch feature extraction to be described. Both this and the pattern descriptions remain ongoing work, but we hope that they may support the development of similar interfaces.
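To illustrate what the beginnings of such feature extraction might look like, the sketch below labels the movement between consecutive recorded samples as scanning, pressing or rubbing. The thresholds and labels are invented for illustration and would need calibrating against the collected data; this is not a validated classifier.

import java.util.ArrayList;
import java.util.List;

public class TouchFeatureSketch {

    /** One recorded sample: time in milliseconds, position in pixels, pressure on the device scale. */
    static final class Sample {
        final long timeMs;
        final float x, y, pressure;

        Sample(long timeMs, float x, float y, float pressure) {
            this.timeMs = timeMs;
            this.x = x;
            this.y = y;
            this.pressure = pressure;
        }
    }

    /** Label the movement between each pair of consecutive samples. Thresholds are placeholders. */
    static List<String> labelMovements(List<Sample> samples) {
        List<String> labels = new ArrayList<>();
        for (int i = 1; i < samples.size(); i++) {
            Sample a = samples.get(i - 1);
            Sample b = samples.get(i);
            double distancePx = Math.hypot(b.x - a.x, b.y - a.y);
            double dtSec = Math.max(b.timeMs - a.timeMs, 1) / 1000.0;
            double speedPxPerSec = distancePx / dtSec;

            if (speedPxPerSec > 800) {
                labels.add("scan");   // rapid sweep to find the shapes in the picture
            } else if (b.pressure > 0.6f) {
                labels.add("press");  // extra pressure on a hard-to-identify point
            } else {
                labels.add("rub");    // slower movement to gain further information
            }
        }
        return labels;
    }
}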

The combination of the data and the observations of visitors with a visual impairment provided a new understanding of touch and its different types. Observation and qualitative interviews, conducted whilst the tablet was touched, provided context that the quantitative data alone does not. This test application provides an experimental approach to the question of how touch is used to perceive tactile information and identifies areas that the user found interesting. We can also view the pressures used and determine different patterns of interest for design and further development.

Showing nuances in touch that we had not anticipated, these observations and data supported a move from the original Raspberry Pi-based prototype to a touch screen, such as a tablet. Using the data from the tours, two prototypes were developed: the first a web-based version written in JavaScript, the second an Android application, chosen because the available tablets used that operating system and the developer had Java knowledge. Both had minimal functionality and were tested on an Android tablet for discussion in a team meeting. We did this to allow all sides to understand the advantages and disadvantages of the options and to determine a path to take, rather than trying to develop parallel solutions.

This discussion clarified assumptions made about the environment. Apart from the environmental concerns of putting an audio-emitting object into galleries with variable noise levels, the reliance on a Wi-Fi connection meant that the JavaScript version was quickly ruled out. That version had spots to help find the touch points on the screen, which are now included in the Android version; these support the correct placing of the tactile image over the trigger points as well as aiding visitors who have some vision to find the audio points. As swell paper had been finalised as the touch-tile material, a copy of the touch image was used for testing.

We found that the prototypes enabled the team to conceptualise the end goal and to clarify development misunderstandings or issues. This conversation turned the Android prototype into a minimal product using the tools and APIs that were to be deployed. Reflecting on the updated requirements and potential designs showed us that, with the time and skill set available, some issues were best solved in other ways. A reconsideration of the use cases, such as supporting the staff to prepare the artefact for installation by making the end object easier to use, created new challenges for translating the points of interest onto the tablet. Some of the initial prototype code was re-used to create a web-based data management system []. The initial TactilePicture application used hard-coded pixel co-ordinates as a stub solution. This had the disadvantage of locking the application to a default screen size and resolution, which caused undesirable side effects when a different model by the same manufacturer, with the same screen size, was tested. Although the devices are different, the grids underneath the screen in both the browser and the tablet use a similar co-ordinate system, leading to the solution of calculating and marking relative co-ordinates in a JSON file. This file can be imported into the TactilePicture application, where the device adapts the positions to its own screen size and resolution, as sketched below. The prototyping approach provided a sense of confidence: what was “important to us as a design team was that we’d proven our design ideas” []. Even when criticism of the design or implementation affected development confidence, it was possible to remedy the situation and regain momentum by demonstrating the changes.
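A minimal sketch of the relative co-ordinate idea follows. The field names and values are hypothetical rather than the project’s actual JSON schema, but the essential step is the same: points are stored as fractions of the screen and scaled to pixels on whatever device loads them.

public class TriggerPointScaling {

    /** A trigger point as stored in the JSON file, as fractions of the screen (0.0-1.0). */
    static final class RelativePoint {
        final String audioId;  // which audio clip the point triggers (hypothetical field)
        final double relX;     // 0.0 = left edge, 1.0 = right edge
        final double relY;     // 0.0 = top edge, 1.0 = bottom edge

        RelativePoint(String audioId, double relX, double relY) {
            this.audioId = audioId;
            this.relX = relX;
            this.relY = relY;
        }

        /** Convert to pixel co-ordinates for the current device's screen. */
        int[] toPixels(int screenWidthPx, int screenHeightPx) {
            return new int[] {
                    (int) Math.round(relX * screenWidthPx),
                    (int) Math.round(relY * screenHeightPx)
            };
        }
    }

    public static void main(String[] args) {
        // e.g. a point marked 32% across and 71% down the tactile picture
        RelativePoint point = new RelativePoint("carfax_tower_intro", 0.32, 0.71);

        int[] onSmallTablet = point.toPixels(1280, 800);
        int[] onLargeTablet = point.toPixels(2560, 1600);

        System.out.printf("1280x800  -> (%d, %d)%n", onSmallTablet[0], onSmallTablet[1]);
        System.out.printf("2560x1600 -> (%d, %d)%n", onLargeTablet[0], onLargeTablet[1]);
    }
}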

The observations and experiments refined the understanding of how the tactile information and the audio-description needed to supplement each other. This provided context for the prototyping approaches and redesign, but we needed to engage with users for feedback while making development changes to the TactilePicture [] application.

A participatory approach

A more participatory design approach [, ] allowed us to get feedback on how the application might be used and perceived by users. As well as the technical feedback loop, we gained user-experience feedback. This encouraged us not only to follow best practices for accessibility but also to respond to an engaged focus group, with a variety of abilities and tastes, who would comment on the design.

The first application iteration began with a set of simple trigger points linked to audio recordings describing the part of the tactile picture surrounding each point, e.g. the Carfax Tower in the Oxford High Street in Figure 1. In an initial test, a single touch was viewed as too quick a way to provide audio, meaning that the points might be triggered too easily whilst exploring. Following advice, this was changed to a double tap on each point, reflecting the users’ familiarity with Apple’s VoiceOver technology. Without the focus group, this suggestion would not have been forthcoming, supporting both the development and the testing criteria used.
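The sketch below shows one way such a double-tap trigger can be implemented on Android using the framework’s GestureDetector. The view class and the audio lookup method are hypothetical names for illustration; only the framework calls are real.

import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

public class DoubleTapTriggerView extends View {

    private final GestureDetector detector;

    public DoubleTapTriggerView(Context context) {
        super(context);
        detector = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDoubleTap(MotionEvent e) {
                // A single tap or a rub across a point no longer plays audio;
                // only a deliberate double tap does, mirroring VoiceOver behaviour.
                playAudioForPointAt(e.getX(), e.getY());
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return detector.onTouchEvent(event) || super.onTouchEvent(event);
    }

    private void playAudioForPointAt(float x, float y) {
        // Hypothetical: look up the nearest trigger point and start its audio track.
    }
}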

The next issue was the size of the point presented in the interface. The prototype has a map of pixels that align to the points on the swell paper. On their own, the pixels are far too small, so we draw a coloured circle of around 1.5 centimetres in diameter around each point, based on the mean width of an index finger. When a point is activated, the application looks up which audio track is to be played according to the machine state. A second issue arose from this: the audio took about 0.3 seconds to load, so a 0.25-second notification sound was added as a functional requirement to confirm that activation had taken place.
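The following sketch illustrates how a circle of roughly 1.5 centimetres can be sized in device pixels on any Android screen and used for a simple hit test. The class and method names are hypothetical; the framework conversion from millimetres to pixels is the point of the example.

import android.content.Context;
import android.util.DisplayMetrics;
import android.util.TypedValue;

public class TriggerPointHitTest {

    /** Radius in pixels of a circle roughly 1.5 cm (15 mm) across on this device's screen. */
    public static float circleRadiusPx(Context context) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        float diameterPx = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_MM, 15f, metrics);
        return diameterPx / 2f;
    }

    /** True if a tap at (tapX, tapY) lands inside the circle drawn around a trigger point. */
    public static boolean isHit(float tapX, float tapY, float pointX, float pointY, float radiusPx) {
        float dx = tapX - pointX;
        float dy = tapY - pointY;
        return (dx * dx + dy * dy) <= radiusPx * radiusPx;
    }
}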

Following comments from the focus group, the audio was split into two layers. The initial requirements specified one audio track per activation point, with the provision that this might be developed into two tracks per point: one for an introduction, the second for a more detailed description. Both are tied to a trigger point, have to play in the correct order and use the same double-touch operation, rather than the second track following on automatically. This created a challenge when the visitor moves to a different trigger point: the new audio must start in the correct state, instead of waiting for the existing track to finish or requiring the visitor to stop it themselves. Watching the users and discussing some of the problems encountered during application development supported a focus on issues and features that benefited them.
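The sketch below is one possible state machine for this two-layer behaviour, based on our reading of the requirements above rather than the application’s actual implementation; audio playback is stubbed out (the real application would use an Android media player).

public class TwoLayerAudioState {

    enum Layer { INTRODUCTION, DETAIL }

    private String activePointId = null;
    private Layer nextLayer = Layer.INTRODUCTION;

    /** Called when a trigger point receives a double tap. */
    public void onDoubleTap(String pointId) {
        if (!pointId.equals(activePointId)) {
            if (activePointId != null) {
                stopCurrentAudio();          // do not wait for the previous track to finish
            }
            activePointId = pointId;
            nextLayer = Layer.INTRODUCTION;  // a new point always starts with its introduction
        }
        play(pointId, nextLayer);
        // The same point's next double tap moves to the other layer.
        nextLayer = (nextLayer == Layer.INTRODUCTION) ? Layer.DETAIL : Layer.INTRODUCTION;
    }

    private void play(String pointId, Layer layer) {
        System.out.println("Playing " + layer + " for point " + pointId);  // stubbed playback
    }

    private void stopCurrentAudio() {
        System.out.println("Stopping current audio");                      // stubbed playback
    }

    public static void main(String[] args) {
        TwoLayerAudioState state = new TwoLayerAudioState();
        state.onDoubleTap("carfax_tower");  // plays the introduction
        state.onDoubleTap("carfax_tower");  // plays the detailed description
        state.onDoubleTap("high_street");   // stops current audio, plays the new introduction
    }
}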

If this participatory step had begun earlier, as a more fully-fledged participatory design process, some of the issues reported above could have been thought through more deeply. The project had a small group of people who tested the software as potential users, so we had to be aware of diminishing returns and of developing only for that population. It would have been beneficial to have started testing with a simple cardboard prototype and a verbal protocol, rather than the initial prototype, and to have refocused development from the user perspective much earlier in the project.

This participatory approach relies on strong cohesion between the team members and a fairly clear set of goals per test. Although it provides the ability to respond to changes in requirements or uses based on the responses and feedback, our experience shows that tensions can arise when feedback is rapid but a component is delayed, or when the focus of the tests is misunderstood across the project. As an interdisciplinary project, translation between domains and approaches emerged as a stumbling point. Despite their challenges, these approaches helped the project create an application for in-gallery testing.

The TactilePicture Application

The prototype application, TactilePicture, was deployed inside a stand in front of a painting that had been converted into a tactile picture, different from the picture used during testing. Left in place for two days, this provided a chance for a different group of visitors with a visual impairment, who had not taken part in the development, to use the device whilst it was evaluated by a third-party user-experience expert. The report outlined issues of which we were already aware, but also provided external validation of the concepts and experience. From a developer perspective, it was useful to watch the device in a gallery environment: to consider how the generated sound interacts with an open, public area and how the device behaves and is used outside the physical testing space, in a non-laboratory setting.

Conclusion

We have presented the process taken to develop a prototype for visitors with a visual impairment. Beginning with a hardware-driven solution and observations, we developed an Android application to model how a tactile picture is touched. From this, we were able to understand the types of touch, i.e. movement and pressure, to help the ongoing development. In parallel, two prototypes were written to discover any further requirements and to develop the prototype that was deployed in a gallery.

The prototype supported the requirements for the touch interface and how it interacted with the visitor. Taking an experimental approach to the issue of communication helped both strands develop a better understanding of how touch perceives tactile information. An underlying theme is that we need to keep the user and their perceptions in mind. On reflection, the move to a participatory design approach in the development strand was beneficial and allowed for a deeper understanding of the issues faced by users, but it came late in the process; it should have taken place much earlier, with a prototyping activity to support the end design.

The teams within which RSEs are embedded, or with which they work, have specific understandings of terms and practices that need to be learned. The research strand on this project provokes ongoing questions about the role of RSEs in distributed teams, such as supporting the research work required for development. The tracking application provided a template for the application development as well as experimental support to the discovery strand. Not only did it provide an understanding of how a screen is touched, it has potential uses as a simple testing application for swell paper pictures and invites questions about larger design languages and how they represent touch applications. This creates an interesting testing problem that we were unable to solve within the project’s life: how to test different types of touch as part of developing for non-visual interaction.