
I spent seven years at Pearson, working on digital learning products for the UK secondary market. Pearson is the largest education publisher in the world, and the products are genuinely good for what they are: comprehensive, curriculum-aligned, well-tested at scale. The reason I left is not that the products were bad. It's that Pearson, like all large publishers, optimises for what wins the procurement process, not for what teachers actually use during lessons.
I became a school governor at a state primary in Southwark in 2021. I'd moved back to London, my daughter was starting Year 1, and someone at the school suggested I join the governing board. I said yes without fully understanding what it involved.
What I Saw as a Governor
Governing boards review school data, approve the school improvement plan, scrutinise spending, and hold the headteacher to account. In practice, a significant portion of every governing board meeting involves looking at pupil attainment data and discussing what the school is doing about gaps.
The school I governed had 420 pupils across seven year groups. Around 35% of pupils were eligible for pupil premium. Maths attainment at KS2 was consistently below the national average, and the maths lead, a dedicated and experienced teacher with 14 years in primary education, was doing everything right in terms of professional practice. She attended every relevant CPD session. She implemented the mastery approach with care. She used her PPA time to track individual pupil progress.
She was also using four separate digital tools across the school: one for times tables practice (Times Tables Rock Stars), one for arithmetic assessments (a homegrown spreadsheet), one for tracking curriculum coverage (a commercial MIS add-on), and one for intervention planning (a Google Sheet she'd built herself). None of them talked to each other. Every half-term, she spent three evenings collating data manually from these four sources to produce the progress reports used in pupil progress meetings.
I asked her how long she'd been doing this. She said it had been the same setup for four years. I asked why she hadn't changed it. She said there was nothing better available at a price the school could afford, and that every time she'd trialled something new it had required a month of setup and a term of chasing teachers to use it consistently. The switching cost was too high to justify change without certainty that the new thing was substantially better.
The Procurement Problem
When I went back to thinking about this from the product perspective I'd developed at Pearson, a specific problem became clear: primary school ed-tech procurement is dominated by headteachers and business managers who evaluate tools based on feature lists, not by classroom teachers who evaluate them based on what it's like to actually use them on a Tuesday morning.
This is not a criticism of headteachers. They're making complex decisions about limited budgets without perfect information. But it creates an incentive structure that rewards products that look comprehensive in a demo and punishes products that are genuinely simple and fast to use but appear thin on features in a 30-minute sales call.
The maths lead at my school had tried three "comprehensive" platform suites over four years. None of them got used consistently by teachers beyond the first term, because consistent use required teachers to allocate significant preparation time, navigate complex interfaces, and manage content libraries rather than just teach. The platforms were powerful. They were also, for the majority of primary teachers, more work than they were worth.
What I Decided to Build
I left Pearson in early 2023 with a specific product hypothesis: a maths practice platform for UK primary schools that is genuinely simpler to use than any existing alternative, that gives teachers meaningful data about their pupils without requiring them to navigate a complex analytics suite, and that is priced at a level that a school spending £10,000–£15,000 on ed-tech annually can afford as a line item without a committee decision.
I had no co-founder, no development team, and no funding. I did have a maths education research background (I'd studied psychology and cognitive science before moving into publishing), a network of primary school contacts from three years as a governor, and a reasonably clear picture of what the product needed to do and — critically — what it did not need to do.
The first version, built with a freelance developer over eight months, was genuinely basic: multiplication and addition practice for Years 3 and 4, a live session view for teachers, and a weekly progress email. No curriculum mapping, no lesson library, no games, no avatars. Every feature request I received from the first four pilot schools I rejected unless it was something the maths lead specifically said she would use every week.
What Surprised Me
The first surprise was how much teachers valued speed over features. The metric that came up most often in early feedback sessions wasn't "does it cover all the curriculum content?" or "can I export to our MIS?" but "how long does it take to get a class started?" Teachers who could have 28 students working independently within 3 minutes of opening the platform were dramatically more likely to use it three times a week than teachers for whom starting a session took 8 minutes of setup. Those 5 minutes are the difference between a platform being used consistently and being used "when there's time."
The second surprise was the depth of feeling about data complexity. I expected teachers to want more data. They wanted less — but more actionable. A list of 15 metrics per student is less useful than a single highlighted recommendation: "these three students are consistently missing the 7× table — consider a 10-minute small group session before Friday's lesson." The shift from data presentation to recommendation changed teacher engagement with the dashboard more than any other product change we made.
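To make that shift concrete, here is a minimal sketch of the kind of rule that replaces a metrics table with a recommendation. Everything in it is illustrative rather than our production code: the Attempt record shape, the recommend_small_groups name, and the thresholds are assumptions chosen for the example.

```python
from collections import defaultdict

# Illustrative record shape: (student_name, times_table, was_correct).
# The shape, names, and thresholds below are assumptions for this sketch,
# not the product's actual data model.
Attempt = tuple[str, int, bool]

def recommend_small_groups(attempts: list[Attempt],
                           min_attempts: int = 5,
                           accuracy_threshold: float = 0.6,
                           min_group_size: int = 2) -> list[str]:
    """Collapse per-question results into one actionable line per times
    table, instead of a 15-metric dashboard per student."""
    # Tally correct answers and total attempts per (student, table) pair.
    tallies: dict[tuple[str, int], list[int]] = defaultdict(lambda: [0, 0])
    for student, table, correct in attempts:
        tallies[(student, table)][0] += int(correct)
        tallies[(student, table)][1] += 1

    # Group students who are consistently below threshold by the table
    # they are missing, so the output reads as a small-group plan.
    struggling: dict[int, list[str]] = defaultdict(list)
    for (student, table), (right, seen) in tallies.items():
        if seen >= min_attempts and right / seen < accuracy_threshold:
            struggling[table].append(student)

    recommendations = []
    for table, students in sorted(struggling.items()):
        if len(students) >= min_group_size:
            names = ", ".join(sorted(students))
            recommendations.append(
                f"{names} are consistently missing the {table}x table: "
                "consider a 10-minute small-group session this week."
            )
    return recommendations
```

The design point is the final loop: dozens of per-student numbers collapse into a single sentence a teacher can act on before Friday's lesson, which is what moved engagement with the dashboard.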
The third surprise was how much the Multiplication Tables Check shaped teacher priorities. The MTC is the single national assessment that creates external accountability pressure specifically around the content we cover. Several teachers told me, unprompted, that they'd been looking for something that would help their Year 4 students specifically with MTC preparation. That's a narrow but extremely concrete use case that gives the product a clear value proposition: it improves MTC scores. We can measure that, and increasingly, we can show the data that demonstrates it.
Where We Are Now
Everybody Counts launched commercially in September 2023. By the end of our first academic year, we had 12 schools on the platform, 340 students in our formal pilot study, and a set of early results that confirmed the core product hypothesis — not perfectly, but well enough to justify continuing.
The £800K pre-seed round from Fuel Ventures, announced in March 2025, changes the trajectory. We can now hire a full-time curriculum lead, expand the question library beyond multiplication and addition, build the Year 1 and Year 2 content that schools have been asking for, and run a larger, more rigorous efficacy study through the 2025–26 academic year.
What won't change: the product philosophy. Every feature decision starts with the same question the maths lead at my Southwark school would ask: "Will I actually use this every week?" If the honest answer is "probably not," the feature doesn't ship. The education publishing industry is full of products that are impressive to buy and exhausting to use. There's space for something different.