Libriax
A behavioural science–driven reading platform focused on habit formation and long-term engagement.
Overview
Libriax started as a simple idea: I wanted a dedicated reading tracker that I could fully customise, both in features and in design. It gradually expanded into something more ambitious: a platform where reading behaviour can be measured, reflected on, and nudged through thoughtful analytics.

[Screenshot: Platform]
Behavioural science motivation
The project became a behavioural science experiment in practice: how can design encourage sustained reading? I focused on mechanisms that influence engagement over time, such as streaks, habit formation cues, gamified progression, leaderboards, and achievements, aiming to make consistency feel rewarding without turning reading into noise.
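To make the streak mechanic concrete, here is a minimal sketch of how a daily reading streak could be computed from logged days; the function name and data shapes are illustrative assumptions, not Libriax's actual implementation.

```python
from datetime import date, timedelta

def current_streak(reading_days: set[date], today: date | None = None) -> int:
    """Count consecutive days with logged reading, ending today (or yesterday,
    so a day that is not over yet does not break the streak prematurely)."""
    today = today or date.today()
    day = today if today in reading_days else today - timedelta(days=1)
    streak = 0
    while day in reading_days:
        streak += 1
        day -= timedelta(days=1)
    return streak

# Example: reading logged on three consecutive days yields a streak of 3.
logs = {date(2024, 5, 1), date(2024, 5, 2), date(2024, 5, 3)}
print(current_streak(logs, today=date(2024, 5, 3)))  # -> 3
```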

[Screenshot: Streaks]
Analytics
A core goal was to go beyond simple book lists and short-term targets. Libriax emphasises long-term patterns: yearly views, reading time trends, and personal breakdowns that help users notice what they actually do over months, not what they intended to do over a week.
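As an illustration of the kind of aggregation behind yearly views and reading-time trends, the sketch below rolls per-session minutes into monthly totals; the names and data shapes are assumptions rather than the production code.

```python
from collections import defaultdict
from datetime import date

def monthly_minutes(sessions: list[tuple[date, int]]) -> dict[str, int]:
    """Roll per-session reading minutes up into month buckets for a yearly view.
    `sessions` is a list of (day, minutes_read) pairs."""
    totals: dict[str, int] = defaultdict(int)
    for day, minutes in sessions:
        totals[day.strftime("%Y-%m")] += minutes
    return dict(totals)

sessions = [(date(2024, 1, 5), 30), (date(2024, 1, 20), 45), (date(2024, 2, 2), 25)]
print(monthly_minutes(sessions))  # {'2024-01': 75, '2024-02': 25}
```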

[Screenshot: Stats]
Goals and UX
I designed the goal system to feel like guidance rather than pressure. The aim was to make progress legible and adjustable, with a UI that supports motivation without punishing missed days. This meant iterating heavily on small UX details that affect whether people actually return.
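One way to make progress adjustable without punishing missed days is to recompute the required daily pace from what remains, rather than carrying a growing deficit. The sketch below illustrates that idea with hypothetical field names; it is not Libriax's actual goal logic.

```python
import math
from datetime import date

def required_daily_pages(pages_read: int, yearly_target: int, today: date) -> int:
    """Recompute the daily pace from the remaining pages and remaining days,
    instead of tracking a penalty for days that were missed."""
    remaining_pages = max(yearly_target - pages_read, 0)
    end_of_year = date(today.year, 12, 31)
    remaining_days = max((end_of_year - today).days + 1, 1)
    return math.ceil(remaining_pages / remaining_days)

print(required_daily_pages(pages_read=1200, yearly_target=3650, today=date(2024, 7, 1)))  # -> 14
```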

[Screenshot: Goals]
AI integration
As AI tools accelerated, I wanted to understand what they are useful for and what they are not. I integrated Microsoft’s Phi-3 as an LLM feature inside the product, and in the process learnt practical prompt engineering, saw common failure modes first-hand, and understood why retrieval, constraints, and clear product boundaries matter more than “chat” alone.
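For context, a constrained Phi-3 call can look roughly like the sketch below, using local inference through Hugging Face transformers and the public Phi-3-mini-instruct usage pattern. Libriax's actual deployment, prompts, and feature surface are not shown here, so treat the details as illustrative.

```python
from transformers import pipeline

# Requires a transformers release with Phi-3 support; the model download is
# several GB and realistically wants a GPU for responsive latency.
generator = pipeline("text-generation", model="microsoft/Phi-3-mini-4k-instruct")

def suggest_next_book(recent_titles: list[str]) -> str:
    messages = [
        # Constrain the model to a narrow product task instead of open-ended chat.
        {"role": "system", "content": "You are a reading assistant. Recommend exactly one "
                                      "book and give one sentence of reasoning."},
        {"role": "user", "content": "I recently finished: " + ", ".join(recent_titles)},
    ]
    out = generator(messages, max_new_tokens=80, do_sample=False, return_full_text=False)
    return out[0]["generated_text"]

print(suggest_next_book(["Thinking, Fast and Slow", "Atomic Habits"]))
```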

[Screenshot: Gamify]
Backend, data, and security
Building Libriax forced me to take backend fundamentals seriously: authentication, session handling, secure routes, and the practical risks of state-changing requests. On the data side, I designed the database schema from scratch and moved from SQLite to PostgreSQL, learning how query patterns, indexes, and data integrity constraints shape both performance and reliability over time.
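The sketch below condenses those backend concerns into a single Flask example: a login check, a POST-only state-changing endpoint, and a per-session CSRF token. The route names and token scheme are illustrative rather than Libriax's actual code.

```python
import secrets
from functools import wraps
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # in production, load this from the environment

def login_required(view):
    """Reject requests that arrive without an authenticated session."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        if "user_id" not in session:
            abort(401)
        return view(*args, **kwargs)
    return wrapped

@app.get("/log-session/new")
@login_required
def new_log_form():
    # Issue a per-session CSRF token for the client to echo back with the form.
    session["csrf_token"] = secrets.token_hex(16)
    return {"csrf_token": session["csrf_token"]}

@app.post("/log-session")  # state-changing requests are POST-only...
@login_required
def create_log():
    # ...and must echo the CSRF token, so a forged cross-site form is rejected.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(400)
    # The insert itself would use a parameterised query; NOT NULL and FOREIGN KEY
    # constraints in PostgreSQL back this up at the database level.
    return {"status": "ok"}
```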

[Screenshot: Launch]
Lessons learnt
The biggest takeaway is that most problems are “already solved” in the ecosystem, but that does not remove the need to understand them. Using the right libraries and patterns matters, yet the real progress came from debugging, making mistakes, and learning what breaks in real usage, especially when building something with users and long-lived data.
Built with
- Python
- Flask
- PostgreSQL
- JavaScript
- HTML
- CSS
- Vercel
- GitHub