This observational, non-randomized, single-arm pilot study aims to assess the acceptability, feasibility, usability, and implementation fidelity of a co-designed, multi-component digital health intervention to support the management of multiple long-term conditions (MLTCs) in government primary health care settings. The study is conducted among adult patients with MLTCs attending rural primary health centres and the primary care providers (medical officers and staff nurses) delivering services at these facilities in India and Nepal. The intervention comprises an electronic decision support system (EDSS) to facilitate evidence-based clinical decision-making, an assisted telemedicine model to enable timely specialist consultations, and a patient-facing mobile application, supported by community champions, to enhance care coordination, self-management, and treatment adherence. Participants will engage with these components over a three-month implementation period and will complete surveys and qualitative interviews, alongside routine supervision checklists and system-usage analytics, to generate implementation and usability data. The study will be implemented across six rural primary health centres in Jodhpur (Rajasthan) and Anakapalli (Andhra Pradesh), India, and Kathmandu, Nepal, enrolling approximately 30 patients per site along with all participating health care providers. Findings from this pilot will inform refinement of the intervention, study tools, and implementation strategies, and will provide critical evidence on contextual adaptability to support the design of a subsequent cluster randomized controlled trial under the NIHR Global Health Research Centre for Multiple Long-Term Conditions.
* Implementation framework and study design: The pilot study uses a cluster-based, non-randomized design in rural primary health centres (PHCs) to implement an integrated digital health intervention for people with multiple long-term conditions (MLTCs). It is conducted over approximately 3 months at selected PHCs (for example, two in Andhra Pradesh and two in Rajasthan, India, plus sites in Nepal). The intervention package comprises: (i) an Electronic Decision Support System (EDSS) to incorporate evidence-based MLTC management into PHC workflows; (ii) assisted telemedicine (a fixed PHC "hub" model and a portable "backpack" kit) to link patients and health workers with remote specialists; (iii) a patient-facing mobile application to support self-management (education, medication/appointment reminders, and messaging); and (iv) trained community health champions to bridge the health system and the community.

* Co-design and intervention development: The core intervention components were iteratively co-designed with stakeholders across three sites in India (Jodhpur, Rajasthan; Anakapalli, Andhra Pradesh) and one in Nepal. Across the sites, 15-18 co-design workshops were conducted between December 2024 and early 2026, culminating in a national synthesis workshop in New Delhi. Participants were stratified into stakeholder groups to ensure broad representation: Group A (patients with MLTCs and their caregivers/community representatives), Group B (primary healthcare providers, technical experts, and researchers), and Group C (policy makers and district/state officials). Workshops were held in accessible community venues (and online for policy makers), with careful advance mapping and consent of participants. Trained facilitators guided semi-structured discussions using journey mapping, brainstorming, voting/prioritization exercises, and live demonstrations of prototype technologies. These activities elicited user needs and system requirements that directly shaped the intervention package. Group A workshops (patients/caregivers) identified critical user preferences (e.g. trusted provider communication, self-care support, and community champions) and barriers (disappointment with fragmented care, out-of-pocket costs). Group B workshops (providers/experts) yielded practical design recommendations, such as integrating clinical guidelines into workflows, incorporating drug-interaction alerts, and defining standard teleconsultation formats with language and trust considerations. A joint workshop with Groups A and B validated and prioritized intervention features: for example, "must-have" features included an editable EDSS dashboard, simple app navigation in local languages, offline data entry, and a reliable telemedicine referral pathway. Feedback on the patient-facing application emphasized low-literacy formats (audio/video, SMS/IVR options) and event-triggered reminders. Throughout, emerging insights were documented and fed back into design cycles (the "design" and "adapt" phases of the ADAPT framework), ensuring that the EDSS algorithms, telemedicine workflows, and mHealth app reflected local context, language, and health system realities. In summary, the co-design process ensured that the intervention components are grounded in stakeholder experience and health system constraints.
The final intervention package consists of an Electronic Decision Support System (EDSS), assisted telemedicine models (facility-based and portable "backpack" models), and a patient-facing mobile application, complemented by trained community champions and strengthened referral pathways. The co-design phase also produced stakeholder engagement structures (e.g. community advisory boards) and preparatory materials (training modules, user manuals) that will underpin implementation. All technical specifications (algorithm logic, user interfaces, data flows) will continue to be refined through iterative feedback during the pilot phase.

* Workflow integration at PHC level: The EDSS is used as part of routine outpatient management rather than as an add-on. Nurses and medical officers are instructed to use the system during normal clinical hours (e.g. during patient intake and consultation). For each patient encounter, PHC staff complete all mandatory fields in the EDSS before submitting the encounter (a minimal completeness-check sketch appears below). Usage logs (timestamps of logins, data entries, referral triggers) are captured continuously on the DigiSetu back end and synchronized daily, creating an audit trail. Supervisors review log data weekly to ensure adherence to protocol. To support these workflows, standard operating procedures (SOPs) have been developed for each task. The SOPs detail: (a) case identification and case-mix classification (how to use the screening tool and record diagnoses); (b) data collection protocols (guidance on REDCap and EDSS data entry, and use of unique patient IDs); (c) the telemedicine workflow (criteria for tele-referral, the scheduling process, and documentation of consult notes); and (d) patient app enrollment. These SOPs were co-created with implementers and iteratively refined during pilot workshops. For example, the telemedicine SOPs explicitly define "who to refer" (e.g. uncontrolled hypertension or diabetes after 3 medication trials) and "when not to refer" (e.g. acute emergencies). All staff nurses and MOs receive printed job aids summarizing key steps for each component (screenshots of EDSS pages, referral algorithms, consent checklists), which are reviewed during training.
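To make the mandatory-field rule above concrete, here is a minimal sketch of a pre-submission completeness check. The field names and dictionary-based encounter structure are hypothetical placeholders, not the study's actual data dictionary; the production EDSS enforces this rule within the DigiSetu interface itself.

```python
# Hypothetical mandatory-field list; the study's actual data dictionary
# lives in the EDSS/REDCap codebook and is not shown here.
MANDATORY_FIELDS = [
    "patient_id", "visit_date", "systolic_bp", "diastolic_bp",
    "diagnoses", "management_plan",
]

def missing_mandatory_fields(encounter: dict) -> list:
    """Return mandatory fields that are absent or blank in the encounter."""
    return [f for f in MANDATORY_FIELDS if encounter.get(f) in (None, "", [])]

def can_submit(encounter: dict) -> bool:
    """Block submission until every mandatory field is completed."""
    missing = missing_mandatory_fields(encounter)
    if missing:
        print("Blocked: complete " + ", ".join(missing) + " before submitting.")
        return False
    return True

# Example: an encounter missing its management plan cannot be submitted.
can_submit({"patient_id": "PHC01-0042", "visit_date": "2025-06-01",
            "systolic_bp": 152, "diastolic_bp": 94, "diagnoses": ["HTN"]})
```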
* Training and capacity building: All healthcare providers in intervention PHCs (medical officers, staff nurses, auxiliary nurse-midwives) will undergo comprehensive training on the intervention components. The initial training consists of a 3-4 day in-person workshop, co-facilitated by clinical, public health, and digital health experts. The curriculum was co-developed by a multi-disciplinary Course Advisory Committee (45 members, including clinicians, technologists, and community representatives) to cover MLTC care principles, EDSS operation, telemedicine processes, and an overview of the patient app. Training methods include lectures, interactive demonstrations of EDSS and app mock-ups, hands-on practice in simulation labs, and case-scenario role-plays. Pre- and post-tests assess knowledge and confidence. A cascade training model will be employed: "master trainers" (e.g. site investigators, district NCD programme officers) first receive intensive instruction and then train the PHC teams locally. State health authorities are engaged from the outset to embed the training into routine NCD programme capacity building. Custom training manuals and quick-reference job aids (in local languages) were developed and distributed to all trainees; for example, printed flowcharts outline the step-by-step process of a telemedicine consult or patient enrollment in the app. Training attendance and performance are tracked via checklists. In the initial pilot phase, 27 PHC staff (mostly nurses) completed the pilot training with post-training evaluation; similar numbers will be trained in Nepal. Refresher sessions are scheduled at 3 months, supplemented by on-site mentoring visits from research staff. Beyond initial implementation, ongoing capacity building is integrated into the project: PHC teams participate in monthly learning sessions with research staff, sharing challenges and solutions. A district-level supervisory structure is in place: each PHC is paired with a mentor (a senior nurse or physician) who conducts quarterly site visits to review fidelity checklists, observe practice, and provide feedback. In parallel, research field coordinators receive training in Good Clinical Practice (GCP), data management, and participant engagement, with continuous skill-building over the course of the study. Community champions and members of newly formed Community Advisory Boards (CABs) at each site (60 members across the 6 pilot PHCs) also undergo training in MLTC awareness and community engagement strategies, ensuring local ownership and sustainability.

* Intervention components and digital architecture: The EDSS is built on the CCDC's DigiSetu platform, expanding prior modules (hypertension, diabetes, CVD) to cover MLTC-relevant conditions (e.g. asthma, osteoarthritis, mental health, sensory impairments, substance use). It provides a structured clinical workflow at the PHC: nurses enter patient vitals, history, and lab results into the EDSS; the system generates guideline-based treatment plans; and medical officers review, override if needed, and finalize management. The EDSS features an at-a-glance dashboard showing key diagnoses, risk status, pending follow-ups, and alerts for missed visits or deterioration. Key design features include offline data entry with automatic syncing for low-connectivity settings (a conceptual sketch of this pattern appears at the end of this section), state-aligned essential-drug databases (with the ability for PHC staff to update availability), and risk-stratification algorithms that flag high-risk patients and apply guideline-based referral criteria. The EDSS is explicitly designed as an assistive tool: clinicians retain full override authority to exercise their own judgment. Back-end audit trails log every action and decision for monitoring. The assisted telemedicine component has two models: a facility-based model providing real-time specialist consultations within the PHC (via teleconference) and a portable "backpack" model enabling outreach to remote community settings. In both models, nurses or mid-level providers collect structured clinical data and basic investigations before the teleconsult, reducing physician cognitive burden. The telemedicine platform integrates electronic health records (EDSS data), point-of-care diagnostics (e.g. glucometer, digital stethoscope), and decision-support summaries. Care pathways are defined by SOPs (e.g. which patients qualify for tele-referral, and how consultations are scheduled and documented). Quality features include offline scheduling with sync (to avoid cancelled consults) and a public-private partnership (PPP) pool of specialists to improve availability (with defined incentives and schedules). All teleconsult requests and outputs (prescriptions, specialist recommendations) are logged and routed back into the PHC workflow to reinforce continuity of care. Importantly, prescriptions are automatically checked against PHC stock: the system flags if a specialist-recommended drug is unavailable, minimizing patient out-of-pocket costs. The patient-facing mobile application (the Ai.M Healthy app by ClinAlly) supports MLTC self-management. Core functions include linkage with the national ABHA Health ID (to import health records securely), personalized medication and visit reminders, symptom tracking, and a content library for lifestyle and adherence support. Based on co-design feedback, the app uses audio-visual, low-literacy content (short videos and interactive prompts) in local languages. Users can log self-reported behaviors via simple yes/no/tick inputs, triggering context-specific feedback. The app is "event-triggered" rather than continuously burdensome: notifications occur around clinic visits, medication changes, or scheduled follow-ups. For patients without smartphones, the system falls back on SMS/IVR reminders and engages caregivers or frontline workers (ASHAs/ANMs) to relay key messages. Critically, the app is interoperable with the EDSS and telemedicine records: for example, it displays the patient's current care plan and follow-up dates, so reminders align with the PHC's instructions. Overall, the intervention is implemented on a secure, cloud-enabled platform compliant with national digital health standards. Data entered at PHCs and in the patient app are encrypted end-to-end and stored on secure servers. The architecture follows the WHO digital health evaluation framework: it is assessed for technical/infrastructure fit (offline sync, data security, interoperability) and workforce/workflow fit (user-interface design aligned with OPD routines). System readiness was confirmed in a prior phase: health facility assessments at 20 PHCs (using IPHS 2022 standards) highlighted gaps that the intervention explicitly addresses (e.g. provision of digital tablets, training on record-keeping). In sum, the digital tools are fully integrated into PHC workflows rather than operating in parallel, with APIs linking EDSS, telemedicine, and patient app data to minimize duplication.
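The offline-first pattern named above (enter data locally, sync when connectivity returns) can be illustrated with a short conceptual sketch. This is an assumption-laden illustration only: the actual DigiSetu sync protocol, endpoints, and conflict-resolution rules are not specified in this registry entry.

```python
# Conceptual offline-first queue: record locally, push when connectivity
# returns. File name and payload shape are hypothetical.
import json
import time
import uuid
from pathlib import Path

QUEUE_FILE = Path("pending_encounters.jsonl")  # hypothetical local queue

def record_encounter(payload: dict) -> None:
    """Append an encounter to the local queue with an audit timestamp."""
    entry = {"id": str(uuid.uuid4()), "recorded_at": time.time(),
             "payload": payload}
    with QUEUE_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def sync_pending(upload) -> int:
    """Try to push queued entries; keep failures for the next daily sync."""
    if not QUEUE_FILE.exists():
        return 0
    entries = [json.loads(line) for line in QUEUE_FILE.read_text().splitlines()]
    remaining, sent = [], 0
    for entry in entries:
        try:
            upload(entry)            # e.g. an HTTPS POST when online
            sent += 1
        except ConnectionError:
            remaining.append(entry)  # still offline; retry at next sync
    QUEUE_FILE.write_text("".join(json.dumps(e) + "\n" for e in remaining))
    return sent
```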
* Quality assurance and supervision: A robust quality assurance (QA) system is established, and supervision protocols require real-time monitoring of key processes. At each PHC, a designated study coordinator conducts weekly reviews of enrollment logs and EDSS entries to verify completeness. Monthly centralized monitoring by the research center includes data audits: for example, random records are cross-checked between REDCap and the EDSS to detect missing or discrepant entries. The EDSS platform automatically generates back-end audit trails for every user action. These logs feed into structured fidelity checklists developed from Carroll's fidelity framework. Performance indicators (e.g. the percentage of EDSS encounters with all mandatory fields completed, and the percentage of patients referred per protocol) are compiled into dashboards for review (a sketch of this roll-up appears below). Supervisors observe at least 10 patient encounters per PHC during the pilot to assess "quality of delivery", e.g. whether MOs appropriately justify any modifications to EDSS-generated plans. Third-party oversight is provided through periodic audits by the independent data monitoring committee, which reviews processes such as consent procedures, data security, and adherence to SOPs. Automated data integrity checks (e.g. range and logic checks in REDCap) flag out-of-range values or missing data, prompting immediate queries to site staff. Key QA tools and indicators include SUS and MAUQ questionnaires capturing system usability, Theoretical Framework of Acceptability (TFA) interviews capturing provider attitudes, and Acceptability of Intervention Measure (AIM) surveys of participants. The Carroll fidelity domains provide pre-defined thresholds: e.g. ≥80% of EDSS steps completed per encounter, ≥80% of eligible patients exposed to each component, and ≥75% of sampled encounters rated "high-quality". Any deviation triggers retraining or corrective action.
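As a sketch of how back-end logs could be rolled up into the fidelity indicators named above, the snippet below computes the percentage of complete encounters and of protocol-aligned referrals. The log field names (all_mandatory_complete, referral_indicated, referred) are hypothetical placeholders, not the actual DigiSetu schema.

```python
# Roll up encounter-level audit logs into dashboard fidelity indicators.
# Field names are hypothetical placeholders for the real log schema.
def fidelity_summary(encounters):
    """Compute the two QA indicators described in the supervision section."""
    n = len(encounters)
    complete = sum(1 for e in encounters if e.get("all_mandatory_complete"))
    eligible = [e for e in encounters if e.get("referral_indicated")]
    referred = sum(1 for e in eligible if e.get("referred"))
    return {
        "pct_encounters_complete": round(100 * complete / n, 1) if n else None,
        "pct_referrals_per_protocol": (
            round(100 * referred / len(eligible), 1) if eligible else None
        ),
    }

logs = [
    {"all_mandatory_complete": True, "referral_indicated": True, "referred": True},
    {"all_mandatory_complete": True, "referral_indicated": False},
    {"all_mandatory_complete": False, "referral_indicated": True, "referred": False},
]
print(fidelity_summary(logs))
# {'pct_encounters_complete': 66.7, 'pct_referrals_per_protocol': 50.0}
```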
* Data management and sample size: All quantitative data are collected using secure electronic systems with audit trails. Baseline and survey data are entered into REDCap at the point of care. EDSS and telemedicine encounter data are logged in DigiSetu with unique participant IDs, and patient app usage data (log-ins, reminder responses) are captured automatically. A single codebook defines all variables across platforms. To minimize missing data, all critical fields are mandatory in the digital forms, and research staff are trained to resolve missing items immediately by direct inquiry. At the central office, periodic data checks identify missing or inconsistent values; statistical imputation (e.g. multiple imputation under an assumption of random missingness) will be applied if needed during analysis to ensure valid inferences. This pilot will enroll approximately 30 participants per PHC (a total of ~180 participants). The sample size was chosen pragmatically to test implementation processes and estimate uptake and fidelity, rather than to power clinical outcomes. Loss to follow-up is expected to be low given the 3-month duration; all efforts (e.g. multiple contact methods, community follow-up) will be used to minimize attrition. Recruitment and retention rates will be monitored monthly.

* Evaluation and outcomes: The intervention will be assessed through a mixed-methods evaluation framework. Acceptability among providers is measured using the TFA: after completing prescribed tasks, MOs and nurses will undergo think-aloud sessions and semi-structured interviews to probe affective attitude, burden, coherence, perceived effectiveness, and self-efficacy. Providers will also complete the 10-item System Usability Scale (SUS), a validated tool yielding a 0-100 score (target ≥70; the standard scoring rule is sketched at the end of this section). Acceptability among patients/caregivers is assessed via in-depth interviews at study end, exploring perceived usefulness, barriers, and satisfaction with the app and care pathway. Usability of the patient app is quantified using the Mobile App Usability Questionnaire (MAUQ), an 18-item instrument with subscales for ease of use, information quality, and usefulness; a mean score ≥5.0 (on a 1-7 Likert scale) will be considered acceptable. Additional usability data (navigation patterns, time per session) will be extracted from app analytics. Telemedicine usability will be inferred from completion rates and satisfaction surveys of both PHC staff and specialists. Feasibility is continuously monitored: back-end logs are analyzed for intervention exposure (e.g. the proportion of eligible patients actually entered in the EDSS, teleconsults completed per schedule, and app-engaged participants). Key operational metrics include average screening and consultation times, system uptime (including frequency of app crashes), and teleconsult success versus dropout rates. These inform real-time troubleshooting and are summarized at the 3-month endline. The logic of telemedicine referrals is also audited: the proportion of specialist suggestions that align with EDSS recommendations and drug availability at the PHC is tracked to evaluate integration. Evaluation findings (both quantitative metrics and qualitative insights) will be triangulated to refine the intervention; for example, app content will be iterated based on SUS/MAUQ scores and user suggestions, and telemedicine SOPs will be adjusted if too few referrals occur. An intention-to-treat analytic approach will be used for primary analyses, with sensitivity checks for missing data. A mixed-effects model with PHC-level random effects is planned for any exploratory impact outcomes (e.g. change in risk-factor control), though hypothesis testing is outside this pilot's scope (an illustrative specification is sketched after the summary below).
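For reference, the standard SUS scoring rule (Brooke's original method) converts the ten 1-5 responses into a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch; the ≥70 target is the threshold stated in this protocol.

```python
# Standard SUS scoring: ten items rated 1-5; odd items score (r - 1),
# even items score (5 - r); the sum is scaled by 2.5 to yield 0-100.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0, above the ≥70 target
```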
In sum, this registry entry describes a comprehensive implementation of a co-designed digital health intervention for MLTCs at the PHC level. It emphasizes participatory design, rigorous training, standardized workflows, and embedded evaluation to ensure the intervention is feasible, acceptable, and scalable within existing health systems.
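To make the exploratory analysis mentioned above concrete, the sketch below fits a linear mixed-effects model with a random intercept for PHC using statsmodels, matching the clustered design. The dataset and variable names (pilot_outcomes.csv, sbp_change, baseline_sbp, age, sex, phc_id) are hypothetical placeholders; as the protocol notes, any such model is exploratory only.

```python
# Illustrative mixed-effects specification with a PHC-level random intercept.
# File and variable names are hypothetical; this pilot is not powered for
# hypothesis testing, so estimates would be descriptive only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pilot_outcomes.csv")  # hypothetical analysis dataset

# The random intercept for each PHC captures clustering of patients in sites.
model = smf.mixedlm("sbp_change ~ baseline_sbp + age + sex",
                    data=df, groups=df["phc_id"])
result = model.fit()
print(result.summary())
```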
Study Type
OBSERVATIONAL
Enrollment
180
Algorithms were developed for hypertension, diabetes, mental health conditions, respiratory diseases, backache, substance use, and vision and hearing problems. Researchers reviewed national and LMIC guidelines and created flowcharts covering the full care pathway, from screening and tests to diagnosis, treatment, referral, and follow-up. After multiple expert reviews, the final flowcharts were converted into structured datasets and workflow variables, forming the basis of the EDSS, which guides health workers step by step in delivering consistent care.
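To illustrate how a care-pathway flowchart can be converted into a structured dataset of workflow variables, the sketch below encodes a simplified hypertension branch as an ordered rule list. The thresholds and actions are illustrative placeholders, not the study's actual algorithm content.

```python
# Illustrative encoding of one simplified flowchart branch as ordered rules.
# Thresholds and actions are placeholders, not the study's actual algorithms.
HYPERTENSION_RULES = [
    {"when": lambda v: v["systolic_bp"] >= 180 or v["diastolic_bp"] >= 110,
     "action": "urgent_referral"},
    {"when": lambda v: v["systolic_bp"] >= 140 or v["diastolic_bp"] >= 90,
     "action": "initiate_or_titrate_medication"},
    {"when": lambda v: True,  # default branch of the flowchart
     "action": "lifestyle_advice_and_routine_follow_up"},
]

def evaluate(vitals, rules):
    """Walk the flowchart top-down and return the first matching action."""
    return next(r["action"] for r in rules if r["when"](vitals))

print(evaluate({"systolic_bp": 152, "diastolic_bp": 94}, HYPERTENSION_RULES))
# -> initiate_or_titrate_medication
```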
Assisted telemedicine refers to a system in which participants get help to access teleconsultations through two models: a facility-based (hub) model, where they visit PHCs and connect with remote doctors through telemedicine hubs, and a backpack model, where portable kits deliver telemedicine services in remote or hard-to-reach areas. Both models are adapted to local needs to ensure continuous care and better access to specialist consultations.
The patient-facing app enables participants to track key health indicators, receive medication and appointment reminders, and access educational content. Community champions help develop patient networks to improve disease management and empower patients in their self-care.
Makavarapalem Primary Health Care Center
Narsīpatnam, Anakapalli, India
Nathavaram Primary Health Care Center
Narsīpatnam, Anakapalli, India
Beru Primary Health Care Center
Bhopālgarh, Jodhpur, India
Pheench Primary Health Care Center
Phalodi, Jodhpur, India
Dadhikot Primary Health Care Center
Bhaktapur, Bagmati, Nepal
Kathmandu Urban Health Promotion Center
Kathmandu, Bagmati, Nepal
Feasibility of Intervention Implementation
Feasibility will be assessed by successful completion of at least 70% of the planned workflow steps during the pilot phase.
Time frame: Feasibility outcomes will be tracked throughout the study period and summarized at 3 months (endline).
Acceptability of Intervention
Acceptability will be assessed as the proportion of health care providers and participants rating the intervention as acceptable, using the Theoretical Framework of Acceptability (TFA). A mean score of ≥4 on a 5-point Likert scale will indicate good acceptability.
Time frame: Acceptability outcomes will be tracked throughout the study period and summarized at 3 months (endline).
Usability of Intervention
Usability will be assessed using the System Usability Scale (SUS), a 10-item questionnaire generating scores from 0 to 100. A SUS score of ≥70 will indicate good usability.
Time frame: Usability will be assessed at 3 months (endline) based on post-intervention feedback from healthcare providers and participants.
Fidelity of Intervention Implementation
Fidelity will be assessed by adherence to intervention delivery processes and data entry procedures as specified in the approved protocol. Compliance of ≥70% will be considered satisfactory.
Time frame: Fidelity will be monitored continuously using supervision checklists and system logs and summarized at 3 months (endline).
Quality of Intervention Delivery
Quality of delivery will be assessed as the proportion of reviewed records demonstrating complete and accurate documentation, with a target of ≥75% indicating acceptable quality.
Time frame: Quality of delivery will be monitored continuously through documentation checks, formally assessed monthly, and summarized at 3 months (endline).