EDC / ePRO Layout Considerations

This post was triggered by a lively discussion in the LinkedIn SCDM group, so part of the credit goes to that group. My purpose here is to summarize the discussion and share some of the related materials.

How important is the EDC layout?

In the paper CRF days, CRF design was relatively flexible and free form: elements could be placed anywhere on the page, and great consideration went into creating effective layouts. In the EDC era, however, systems have added their own limitations to what is possible. What is the impact of these limitations, and why are some of the leading EDC systems so restrictive?

It is certainly more pleasant to use a system that is pleasing to the eye. The user interfaces of many EDC systems are fairly primitive compared to the modern interfaces seen in consumer software today. New devices, such as touch-screen tablets, have entered the market, and user interface design for a touch device should be considered separately from that of a keyboard-and-mouse system. For example, buttons should be bigger and spaced further apart, as a finger is not as accurate as a mouse. Some things, such as mouse-over features, don't really work well in a touch environment.
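To make this concrete, here is a minimal TypeScript sketch of how a browser-based eCRF front end might adapt its controls when it detects a touch device. It is only an illustration: the pixel values and the 'show-inline-hints' class are my own placeholders, not anything taken from a particular EDC product.

```typescript
// Sketch: adapt control sizing when the client is a touch device.
// The pixel values are illustrative placeholders, not recommendations
// from any guideline.

const TOUCH_TARGET_PX = 48;   // assumed minimum size for finger taps
const MOUSE_TARGET_PX = 28;   // assumed minimum size for mouse clicks

function applyPointerAwareSizing(root: HTMLElement): void {
  // `pointer: coarse` matches finger-driven touch screens;
  // `pointer: fine` matches mouse or stylus input.
  const isTouch = window.matchMedia('(pointer: coarse)').matches;
  const target = isTouch ? TOUCH_TARGET_PX : MOUSE_TARGET_PX;

  root.querySelectorAll<HTMLElement>('button, input, select').forEach(el => {
    el.style.minHeight = `${target}px`;
    el.style.minWidth = `${target}px`;
    // Extra spacing so neighbouring targets are not tapped by accident.
    el.style.margin = isTouch ? '8px' : '2px';
  });

  // Mouse-over hints don't translate to touch; a hypothetical CSS class
  // could switch hover tooltips to always-visible inline hints instead.
  if (isTouch) {
    root.classList.add('show-inline-hints');
  }
}

// Usage: apply to the form container once the page has loaded.
document.addEventListener('DOMContentLoaded', () => {
  applyPointerAwareSizing(document.body);
});
```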

The way in which the system is used is also important when considering the layout. Many sites actually print out the eCRF pages, fill in the data on paper and then transcribe it into the system later. This kind of process can be taken into account in the design. Consider the behaviour of skip-fields, for example. If the EDC system only displays certain fields when another field is filled in, those fields might not appear in the print-out at all, although some tools have print features that allow all fields to be included.
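As an illustration of the skip-field point, here is a small hypothetical sketch of how conditional display could be modelled so that the on-screen view hides skipped fields while a print view can include everything. The field names and the showIf structure are invented for the example, not taken from any real system.

```typescript
// Sketch: a minimal skip-field model. A field is shown on screen only
// when its condition (based on another field's value) is satisfied,
// but a print view can choose to include every field regardless.

interface FieldDef {
  id: string;
  label: string;
  // Optional skip condition: the field is visible only if the referenced
  // field currently holds one of the listed values.
  showIf?: { fieldId: string; equalsAnyOf: string[] };
}

type FormData = Record<string, string | undefined>;

function isVisible(field: FieldDef, data: FormData): boolean {
  if (!field.showIf) return true;
  const value = data[field.showIf.fieldId];
  return value !== undefined && field.showIf.equalsAnyOf.includes(value);
}

// Screen rendering hides skipped fields; a paper worksheet print-out
// can pass `includeHidden = true` so sites see every field.
function fieldsToRender(fields: FieldDef[], data: FormData, includeHidden = false): FieldDef[] {
  return includeHidden ? fields : fields.filter(f => isVisible(f, data));
}

// Example: the dose field only appears once "Was study drug taken?" is "Yes".
const fields: FieldDef[] = [
  { id: 'DRUGTAKEN', label: 'Was study drug taken?' },
  { id: 'DOSE', label: 'Dose (mg)', showIf: { fieldId: 'DRUGTAKEN', equalsAnyOf: ['Yes'] } },
];

console.log(fieldsToRender(fields, {}).map(f => f.id));        // ["DRUGTAKEN"]
console.log(fieldsToRender(fields, {}, true).map(f => f.id));  // ["DRUGTAKEN", "DOSE"]
```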

If data is transcribed from another system, it may be a good idea to mimic the layout of the source system. For example, if someone needs to key in lab results, keeping the items in the same order is already helpful.

In summary, the layout and user experience of EDC in its traditional use are certainly important enough to deserve special attention. However, the lines between EDC, ePRO (and eCOA) are starting to blur, and EDC companies are starting to venture into the ePRO world. This is where the layout becomes much more important, which leads me to the next question …

What to take into account when considering using an EDC system for ePRO

First of all, you need to understand ePRO, or get advice before trying this. For a long time, I took ePRO for granted and didn't realize how much it is trivialized by people who haven't been involved and don't understand the users and the science of (e)PRO. There are basically two key elements to consider:

A) The users are completely different. With EDC, your users are trained professionals who are paid to use such systems and are likely to tolerate and deal with whatever system they are given. With ePRO, your users are often patients with a serious condition who already have to deal with the burdens of the study; the systems need to add as little extra burden as possible, and preferably even reduce it through features such as context-sensitive instructions and reminders. There is a lot of science behind achieving a proactive user experience for patients, one that guides them step by step through whichever task they need to complete at any given time while keeping the user interface as simple and easy to use as possible. All of the complexity is 'under the hood', not necessarily in the UI. I have been teaching ePRO user interface design for a number of years, but even after 12 years focused on the field, I am still learning.

B) It's amazing how much effort, science, time and money goes into designing Clinical Outcome Assessments (PROs, ClinROs, ObsROs, etc.). If you want to know more, just read through the FDA's guidance on patient-reported outcomes for labeling claims. It can cost millions and take years to come up with the original paper questionnaire, which can be hard to comprehend given that producing the questions in SurveyMonkey would take about 15 minutes. However, a lot of science goes into producing the exact questions and testing them with real patients to ensure they measure what they are supposed to measure and that patients understand them. The layout, font, placement of the questions, the data capture elements and so on are very carefully considered, and their psychometric properties are tested. This is almost always done on paper, and when the instrument is converted into electronic format, additional testing is needed. The extent of this testing depends on the intended use of the data and on how much has been changed compared to the original. If the system used to produce the (e)PRO is very restrictive, those changes are likely to be bigger. If the system is not patient-friendly, it is likely to cause issues in usability testing.

That being said, there are scenarios where collecting patient data directly in an EDC system is probably feasible already today, and more will be in the future. Many EDC companies are looking to enter the fast-growing ePRO market and are upgrading their systems to better support this, or even creating separate ePRO systems that plug into their existing platforms. There is a steep learning curve for EDC companies entering this market, and I have not yet seen anyone get it quite right.

Below are some materials related to this topic that I have been allowed to share. While I've been involved in some of this research, most of the credit goes to Paul O'Donohoe, CRF Health and Oxford Outcomes (an ICON company).

ISPOR 2012 Berlin_eCOA POSTER

Presentation Paul O’Donohoe CRF Health

What are the different types of eCRF design across EDC systems?

We typically see two types of layout across EDC systems: free form and list form. Of these, list form can be further split into lists that allow questions to be arranged horizontally and those that do not.

Free form allows questions to be placed in any preferred layout, also called WYSIWYG (What You See Is What You Get). This approach has the advantage that it can mimic the paper layout and group related questions into appropriate blocks. The downside of free form is the complexity it adds to the user interface. Very often an eCRF will include flags, queries, review indicators and so on, and placing these on a free-form layout without compromising readability or usability can be difficult. Visually, the user must scan across the whole page to see which actions to perform; sometimes things are missed or found too complex to process.

List form is easier to implement, especially with older browser technologies. Here the eCRF questions and labels are placed on one side of the page, and the responses are entered on the other. The format is very much controlled by the application; the only flexibility offered is the order of the fields. List forms are in some respects simpler to operate, as the values and flags appear in a consistent place. If you wish to add multiple queries to a question, you simply extend the page; in a free-form layout, these need to be placed elsewhere or hidden behind an indicator.
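One way to picture the difference is in the layout information each model has to carry. The sketch below is my own simplified illustration, not how any particular EDC system stores its forms: a free-form item needs explicit coordinates chosen by the designer, while a list-form item needs only an order.

```typescript
// Sketch: the two layout models as data structures. In a free-form
// design each question carries explicit coordinates; in a list form
// the only layout information is the field order.

interface Question {
  id: string;
  label: string;
}

// Free form: the designer positions each question on the page.
interface FreeFormItem extends Question {
  x: number;      // pixel or grid position chosen by the designer
  y: number;
  width: number;
}

// List form: the application controls placement; the designer only
// controls ordering (and, in some systems, horizontal grouping).
interface ListFormItem extends Question {
  order: number;
}

// A list form renders as label/response pairs, one row per question,
// so queries and flags can simply extend the row without overlapping
// neighbouring fields.
function renderListForm(items: ListFormItem[]): string {
  return [...items]
    .sort((a, b) => a.order - b.order)
    .map(q => `${q.label.padEnd(30)} | [ response ]`)
    .join('\n');
}

const demo: ListFormItem[] = [
  { id: 'HR', label: 'Heart rate (bpm)', order: 2 },
  { id: 'SBP', label: 'Systolic BP (mmHg)', order: 1 },
];
console.log(renderListForm(demo));
```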

Validating list form versus free form for PRO

As software designers, we see merits in both approaches. The free-form layout offers more flexibility to present the content in a format that more closely represents the original. On the other hand, in today's world of many different end devices, free-form, pixel-by-pixel design doesn't work that well because of the variety of screen resolutions. A dynamically scaling page, where the designer controls only the relative position of the fields, is more practical: it lets different devices use the same content and allows queries and other elements to be presented on screen in a sensible format.

With dynamically scaling, 'device-independent' pages, instrument validation requires a different approach. Instead of testing the final design on specific hardware, the pages should be tested for appropriate usability and performance against standards. For example, the user interface may adapt itself for use on tablets, and testing can be done against a pre-defined standard that specifies things like minimum screen resolution, DPI, and finger touch-screen or stylus operation. A similar standard and tests can be developed for keyboard-and-mouse devices. Technically, it is possible to restrict the use of the system to compatible devices that meet the minimum standard.
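Here is a rough sketch, again in TypeScript, of what such a device check might look like in a browser-based system. The threshold values and the TABLET_STANDARD object are placeholders I made up for the example, not figures from any published standard.

```typescript
// Sketch: check the client device against a pre-defined minimum
// standard before allowing entry. The threshold values below are
// illustrative placeholders, not figures from a published standard.

interface DeviceStandard {
  minShorterSidePx: number;   // shorter screen dimension
  minLongerSidePx: number;    // longer screen dimension
  minDevicePixelRatio: number;
  requireTouch: boolean;
}

const TABLET_STANDARD: DeviceStandard = {
  minShorterSidePx: 768,
  minLongerSidePx: 1024,
  minDevicePixelRatio: 1.5,
  requireTouch: true,
};

function meetsStandard(std: DeviceStandard): boolean {
  const shorter = Math.min(window.screen.width, window.screen.height);
  const longer = Math.max(window.screen.width, window.screen.height);
  const sizeOk = shorter >= std.minShorterSidePx && longer >= std.minLongerSidePx;
  const densityOk = window.devicePixelRatio >= std.minDevicePixelRatio;
  // `pointer: coarse` is a reasonable proxy for a finger-operated touch screen.
  const touchOk = !std.requireTouch || window.matchMedia('(pointer: coarse)').matches;
  return sizeOk && densityOk && touchOk;
}

// Restrict the session to compatible devices only.
if (!meetsStandard(TABLET_STANDARD)) {
  // Block entry rather than presenting a validated instrument on an
  // untested form factor.
  document.body.textContent =
    'This device does not meet the minimum requirements for this study.';
}
```

The point is simply that once the standard is written down, enforcing it becomes a small, testable piece of code rather than a matter of testing every individual device model.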
