Ketan Rajpal

Education Technology

Designed for Learning, Not Just for Looking Good

11 May 2026

Why Student-Centred Design Makes EdTech Work | A Teacher's Guide

There is a particular kind of frustration that teachers know well. A new digital tool arrives — impressive in the demonstration, confidently presented, full of features — and within a week, the students are confused, disengaged, or quietly working around it. The tool works. It just does not work for them.

This is not a technology problem. It is a design problem. And it is one that student-centred design exists to solve.

What Student-Centred Design Actually Means

Student-centred design is not a methodology or a framework. It is a way of asking the right question from the beginning.

Most technology is designed by adults, evaluated by adults, and purchased by adults. The student — the person who will spend hours inside the tool, navigating its menus, reading its instructions, making sense of its logic — often enters the process only after the decision has been made. The result is a product shaped by what adults think learning should look like, not by what students experience when they sit down and try to do it.

Student-centred design reverses that. It starts with the learner. What do they find confusing? Where do they give up? What keeps them coming back? It treats those answers not as feedback to consider later, but as the foundation the product is built on.

That shift is smaller than it sounds. But the difference it makes is not.

Why It Changes Engagement — and Learning

A student who cannot find what they need in the first thirty seconds will not try for sixty. A student who finds an interface disorienting will carry that disorientation into the learning itself. The cognitive effort that should go into understanding the lesson goes instead into understanding the tool.

This is the hidden cost of poor design — not just frustration, but friction that sits between a student and the thing they came to learn. Research into digital learning environments consistently shows that when interfaces are clear, predictable, and built around how students actually think and move, engagement rises. Not because the content changed, but because the path to it did.

The converse is equally true. Schools that invest in technology without involving students in evaluating it often discover the same thing: adoption falls off, workarounds appear, and the original investment sits underused. Not because the students were unwilling, but because no one had asked them whether the tool made sense.

A Three-Step Checklist for Teachers and Buyers

Evaluating any ed-tech product with students at the centre does not require a research team or a lengthy procurement process. It requires three deliberate steps — applied before the purchase, not after.

Step one: listen before you decide.

Before any tool is chosen, put it in front of a small group of students — not to demonstrate it, but to watch them use it. Ask them to complete a realistic task. Do not guide them. Watch where they hesitate, where they scroll without purpose, where they ask for help. Those moments are the data. What they find easy matters too, but what they find hard matters most.

This is not a formal usability study. It is fifteen minutes of honest observation. And it will tell you more than any product demonstration ever will.

Step two: test in the real classroom, not the ideal one.

A tool that works in a quiet pilot with motivated students does not always survive contact with a full class on a Wednesday afternoon. Before committing to a platform, ask whether you can run a genuine trial — different year groups, different subjects, different levels of confidence with technology. The tool that holds up across that range is the one worth choosing.

Pay particular attention to how students who struggle with technology experience it. If the interface works for your most confident learners but leaves others behind, the design is not doing enough.

Step three: iterate on what students tell you.

If the tool is already in use and something is not working, take the feedback seriously. Not as a complaint to manage, but as information to act on. Where possible, share that feedback with the provider. Good ed-tech companies welcome it — they know that the people closest to the product every day are the people who understand it best.

Where direct feedback to providers is not possible, adapt how the tool is introduced. Adjust the onboarding. Simplify the first steps. Small changes to how students encounter a platform can make a significant difference to how they experience it from that point on.

The Checklist

Before evaluating or adopting any ed-tech product, ask:

  • Did we watch students use it before we decided?
  • Did we test it with different kinds of learners, not just our most confident ones?
  • Do we have a way to collect and act on student feedback once it is in use?
  • Does the interface respect where students are, or assume where they should be?
  • Could a student make sense of this tool without being told how?

If you can answer yes to most of those questions, you are close to a tool worth trusting. If several of them are uncertain, the design work may not be finished yet — and it is worth finding out before the classroom does.

The Bigger Principle

Technology in education is only as good as the learning it enables. A platform that impresses in a boardroom but confuses in a classroom has not yet done its job. The most important evaluation it will ever face happens not in a demo, but in the quiet moment when a student sits down, opens it for the first time, and decides whether it is worth their attention.

That moment is what student-centred design is built around.

And it is the moment every educator deserves to get right.

#EducationTechnology #EdTechDesign #StudentEngagement #ClassroomInnovation #DigitalLearning #UXDesign #TeacherResources