Niki Yaw
vancouver, bc
ABOUT ME
Data engineer actively transitioning into full-stack development.
Learning bits & pieces by building.
PRODUCTS
-
gym-ie
no idea what to do in the gym? gym-ie tells you what to do.
-
timer
keep track of your time with clarity.
PROJECTS
-
Wellness Tracker API
A RESTful backend service I developed to serve as the core of a personal wellness application. It functions as a digital journal, allowing users to securely track and manage their habits, workouts, and recipes.
-
Asynchronous Task Processing Service
A backend system for handling long-running or resource-intensive tasks outside the request/response cycle. Users can submit jobs through an API or dashboard, process them asynchronously with Celery workers, and monitor progress in real time.
BIWEEKLY NOTES
-
14 Sept 2025
It's amazing how a tiny missing import can eat up an entire weekend. But that's the story of my first two-week project: the Wellness Tracker API. While the end result is a foundational backend for tracking habits, the real story is in the unexpected lessons learned and the bugs I battled along the way.
The Starting Line: Why FastAPI and PostgreSQL?
My goal was to learn modern, production-grade tools. I chose FastAPI for its performance and developer-friendly setup, making it the perfect entry point for my first full-stack API. For the database, I went with PostgreSQL, an industry-standard, robust, and open-source option. Along the way, I also learned a lot about project structure and how to organize my files, a skill I didn't get to fully practice in school or at work.
The Hurdles: The Bug That Almost Won
The real challenge came with getting Alembic and SQLAlchemy to play nice. While SQLAlchemy itself made sense to me as a way to manage the database in Python, the migration process with Alembic was a different beast entirely.
I hit a major roadblock trying to create my users and habits tables. Every time I ran the migration, the tables just wouldn't appear in my remote PostgreSQL database. I was completely stumped. My initial troubleshooting was a frantic process of elimination: I checked the database URL for typos, tinkered with psycopg versions, and even deleted the entire Render database service to start from scratch. Nothing worked. The error was always the same: users table not found.
The "A-ha!" Moment
After countless hours, I finally found the problem: a single missing import statement! Alembic couldn't "see" the SQLAlchemy models I had defined for my tables, so it couldn't generate the correct code in the migration script. The models existed, but Alembic didn't know they did! Once I fixed that one line, the migration scripts generated correctly, and the tables were created instantly. It was a massive learning moment about how these tools interact under the hood.
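For anyone hitting the same wall: the fix was importing the models module in Alembic's env.py so that `target_metadata` actually contains the tables. The mechanism behind the bug can be sketched with a stdlib stand-in for SQLAlchemy's declarative registry (the `Base`/`User` names here are illustrative, not my actual code):

```python
# A stdlib stand-in for how SQLAlchemy's declarative Base works: model
# classes register themselves in shared metadata when their class body runs.
TABLES: dict[str, type] = {}

class Base:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        TABLES[cls.__name__] = cls   # runs only when the class is defined

# If the module defining User is never imported, this definition never
# executes, TABLES stays empty, and autogenerate has nothing to diff against.
class User(Base):
    pass
```

Class bodies only execute when their module is imported, so skipping that one import leaves the shared metadata empty and autogenerate emits an empty migration.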
I also ran into some interesting challenges with GitHub Actions. While my test scripts ran perfectly locally using a SQLite database, they failed in the cloud environment. This forced me to rethink my testing strategy and adapt the pipeline for different environments, a valuable lesson in building for the real world.
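One pattern that helped (a hypothetical helper, not the exact code from my repo): let the environment decide which database to use, with SQLite as the local default. CI can then set `DATABASE_URL` to point at a Postgres service while local test runs stay simple:

```python
import os

# Hypothetical settings helper: use whatever DATABASE_URL the environment
# provides (CI, production), and fall back to local SQLite for dev runs.
def database_url() -> str:
    return os.environ.get("DATABASE_URL", "sqlite:///./dev.db")
```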
What's Next?
I'm thrilled with the progress and the lessons learned. The API now supports user authentication and basic habit tracking, which is perfect for my own fitness journey. I'm excited to continue building on this foundation. Here's what's next on the roadmap:
- Expanded Features: Adding dedicated endpoints for workouts and recipes.
- Security & Scalability: Implementing rate limiting to protect the API from abusive or excessive traffic.
- Data Aggregation: Creating a way to analyze user data, such as a weekly summary of completed habits.
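As a preview of the rate-limiting item, here's a minimal token-bucket sketch, one common approach to it; the class name and numbers are illustrative:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1   # spend one token for this request
            return True
        return False           # over the limit: reject (e.g. HTTP 429)
```

In a real API this would sit in middleware, keyed per user or per IP.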
-
28 Sept 2025
For my second backend project, I took on something a bit more advanced than my last API: building the Asynchronous Task Processing Service. Instead of just making endpoints and connecting them to a database, this project pushed me into new territory: distributed task queues, real-time monitoring, and structuring a more complex backend.
Why this project?
Honestly, this one wasn’t even my idea—it came as a recommendation when I said I wanted to go deeper into backend development. And it turned out to be the perfect next step. Unlike my first project (which was mostly CRUD endpoints), this one introduced me to how production systems handle long-running, resource-intensive tasks without blocking the main application. I got hands-on with Celery and Redis, two industry-standard tools for asynchronous processing, and started to see how these pieces fit into larger systems.
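The core pattern, submit now, process later, check status anytime, can be sketched in-process with a stdlib thread standing in for a Celery worker and a plain dict standing in for Redis (all names here are illustrative):

```python
import queue
import threading
import uuid

jobs: dict = {}                        # job_id -> {"status": ..., "result": ...}
task_queue: queue.Queue = queue.Queue()

def submit_job(n: int) -> str:
    """API side: record the job, enqueue it, and return an id immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "n": n, "result": None}
    task_queue.put(job_id)
    return job_id

def worker() -> None:
    """Worker side (what Celery does for real): run jobs in the background."""
    while True:
        job_id = task_queue.get()
        job = jobs[job_id]
        job["status"] = "running"
        job["result"] = sum(range(job["n"]))   # stand-in for real work
        job["status"] = "done"
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The dashboard is just the read side of this: poll `jobs[job_id]["status"]` and render whatever it says.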
The Hurdles: Strict Typing Everywhere
I grasped Redis and Celery pretty quickly—they make sense once you see them in action. But the real challenge came when I tried to tighten up the project with typing improvements. It forced me to think carefully about how data flows across the whole pipeline: API input schemas, Pydantic models, database columns, and task definitions. Everything had to line up perfectly, or mypy would complain. Debugging those mismatches was frustrating at first, but it taught me a lot about consistency and the importance of strong typing in catching errors early.
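A toy version of that alignment, with stdlib dataclasses as stand-ins for the Pydantic and ORM models (field and class names are illustrative): if a status string or field name drifts between layers, mypy flags it instead of the bug surfacing at runtime.

```python
from dataclasses import dataclass
from typing import Literal

# One shared definition of valid statuses, used by every layer.
JobStatus = Literal["queued", "running", "done", "failed"]

@dataclass
class JobCreate:
    """API input shape (what a Pydantic schema would validate)."""
    name: str
    payload: str

@dataclass
class JobRecord:
    """Database-row shape; fields must mirror JobCreate plus bookkeeping."""
    id: int
    name: str
    payload: str
    status: JobStatus

def create_job(data: JobCreate, next_id: int) -> JobRecord:
    # mypy rejects e.g. status="pending" here, since it isn't a JobStatus.
    return JobRecord(id=next_id, name=data.name, payload=data.payload, status="queued")
```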
What I picked up along the way
There wasn’t a single big “A-ha!” moment this time, but instead a lot of smaller lessons. I started to understand how to properly organize backend components: having centralized configs for settings, database connections, Celery, and logging, then wiring them into different modules cleanly. It felt less like hacking things together and more like designing a real system.
The fun part
Even though this was a backend-heavy project, I had fun playing with the frontend dashboard. Watching jobs appear, update in real time, and change status as workers picked them up made everything feel tangible. It’s the kind of feedback loop that makes backend systems exciting—suddenly all those moving parts click together, and you can see it working live.
What's Next?
Now that the basics are in place, I see a lot of directions this project could grow:
- Error handling & retries: building smarter workflows for failed jobs.
- Authentication & multi-user support: letting different users manage their own queues.
- Deployment to the cloud: running this on GCP/AWS with monitoring dashboards.
- More complex tasks: chaining jobs, scheduling, or even supporting different job types.
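For the retries item, a minimal exponential-backoff wrapper shows the general shape (a sketch only; in the real project I'd lean on Celery's built-in task retry support):

```python
import time

def run_with_retries(task, max_attempts: int = 3, base_delay: float = 0.5):
    """Re-run `task` on failure, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        try:
            return task()
        except Exception:
            if attempt == max_attempts - 1:
                raise                              # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
```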
REACH
connect on GitHub · LinkedIn — or send me an email at nikiyql@gmail.com