Here to demo the website? Reach out to me at [email protected] or on LinkedIn for more information.
After having spent the past while working in a React/Next stack, I decided I had my feet under me enough to try learning Vue & Nuxt. I had heard them described as up-and-coming competitors in the space that React and Next dominate. With that in mind, I needed a small project to use as the whetstone upon which I'd sharpen my skills.
Not too long ago, I had decided to create a small WPF app in order to learn how desktop apps can be made, how the Windows notifications API works, and how people make use of the system tray in the taskbar. For this project, I made a simple GUI that allows the user to open, pause, and close a workday. The app shows you how long you've been working and allows you to enable periodic desktop notifications reminding you to take a break.
Like most first attempts, the app wasn't perfect. The time calculations were hacked together. The notifications would sometimes fire back-to-back, nonstop, until you disabled them in the Windows settings, and the app didn't show as much information as I'd have liked to display. These imperfections, combined with the relatively small scope of the project, led me to select it as the basis of my Nuxt project. Because I had already implemented a version of the baseline features, I had some spare innovation tokens to spend on new ones:
- A user can open and close workdays.
- A user can pause their workday (e.g. to go on break).
- Information about the workday is saved to persistent storage.
- A user can see information about their current or last workday, depending on whether their workday is currently open or closed.
- A user can activate and deactivate periodic push notifications about their workday.
- A workday timeline that shows the user relevant data to aid them in planning their day/week.
- A secure environment that only the user can access.
- A slick, well put-together UI.
- Edit, view, and export workday data.
- Live updates for all clients when the workday status changes.
I needed to make some decisions about what supporting technologies would help me bring this project to life quickly and without abstracting away the details of that which I was trying to learn.
Of course, this whole thing kicked off because I wanted to learn Vue. The .vue file syntax felt very alien to me. Given this is where the whole project began, it's probably the weakest part of it all, as I was reading docs and diving into unfamiliar concepts for the first time. I've had to go back and rework several of my components as the concepts slowly cemented themselves in my mind. Syntax and approaches I had used in previous projects constantly threw bugs in my face because, it turns out, Vue has systems like composables, plugins, and reactivity designed to handle those tasks, and I was short-circuiting their capabilities by not using them correctly.
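To make that concrete, here's a minimal, hypothetical composable in the spirit of what this project needs (the name `useWorkdayClock` and its internals are illustrative, not the project's actual code). Letting `ref` and `computed` do the bookkeeping is exactly the kind of thing I was initially short-circuiting by hand:

```ts
// composables/useWorkdayClock.ts — a simplified, hypothetical sketch.
// Vue's reactivity (ref/computed) does the bookkeeping I originally tried
// to do manually with plain variables.
import { ref, computed, onUnmounted } from 'vue'

export function useWorkdayClock(startedAt: Date) {
  const now = ref(new Date())

  // Tick once a second; the computed below updates automatically.
  const timer = setInterval(() => { now.value = new Date() }, 1000)
  onUnmounted(() => clearInterval(timer))

  // Elapsed time formatted as HH:MM:SS.
  const elapsed = computed(() => {
    const ms = now.value.getTime() - startedAt.getTime()
    const totalSeconds = Math.max(0, Math.floor(ms / 1000))
    const h = Math.floor(totalSeconds / 3600)
    const m = Math.floor((totalSeconds % 3600) / 60)
    const s = totalSeconds % 60
    return [h, m, s].map((n) => String(n).padStart(2, '0')).join(':')
  })

  return { elapsed }
}
```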
I didn't want to spend weeks putting together components that would ultimately end up being subpar compared to the ones that big teams full of very experienced developers had already built. I narrowed down my choices to shadcn-vue, PrimeVue, and Quasar. I don't remember why I didn't choose Quasar, but my first pick was shadcn because it gives the developer a lot of flexibility to use the components exactly the way they see fit. Unfortunately, it was a little too much flexibility for my use, because I got too bogged down in the details trying to get things looking right. This left PrimeVue. It provides a lot of useful components, and the implementation went smoothly, but I later discovered that PrimeVue v4 introduces some FOUC when used with server-side rendering (read: when used with Nuxt). I was in too deep to switch libraries again, so I accepted it for now with the hope that the team will push out a fix in a future update.
PrimeVue ships with the pretty clean-looking Aura theme to hit the ground running and also provides some plug-n-play iconography that I found useful. I augmented PrimeVue with Tailwind utility classes, which make responsive design a breeze. Since dark mode is all the rage these days, I built a toggle using @nuxtjs/color-mode with a little tweaking to work with Tailwind and PrimeVue.
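As a rough illustration (not the project's actual code), the toggle can be a thin wrapper around the `useColorMode()` composable that @nuxtjs/color-mode provides, assuming Tailwind's class-based dark mode and PrimeVue's dark-mode selector are both pointed at the class the module toggles:

```ts
// composables/useDarkMode.ts — hypothetical sketch.
import { computed } from 'vue'

export function useDarkMode() {
  // useColorMode() is auto-imported by the @nuxtjs/color-mode module.
  const colorMode = useColorMode()

  const isDark = computed(() => colorMode.value === 'dark')

  function toggle() {
    colorMode.preference = isDark.value ? 'light' : 'dark'
  }

  return { isDark, toggle }
}
```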
Finally, I needed a good font for my digital stopwatch. At first, I used Doto because I didn't know much about serving my own font files, and its styling most closely matched a 7-segment display among the fonts I could find on Google Fonts. Once I had a better understanding of using fonts, I landed on Keshikan's DSEG font family. He also gave me the idea for the blinking colons and the numeric background.
The orchestrator of all the bits and pieces that make up this project is Nuxt. One of the main selling points of Nuxt is its auto-import feature, which means you'll almost never type `import { x } from 'y';` again. I must admit I had a healthy distrust of the capabilities of this feature, and I spent far too much time trying to interfere with it, but when I left it to its own devices, everything just showed up where it should.
Beyond that, it provides some pretty nifty dev tools, modules, an opinionated project file structure, and more nice-to-haves that got me up and running pretty quickly. It also uses "blazingly fast" Vite for hot module reloading in development and a nice, clean build for deployment. Rounding it out is the Nitro server, which does the serving quick, fast, and in a hurry.
One last note on Nuxt: all of its configuration lives in one file. When you add other packages via a Nuxt module, you can configure them in the Nuxt config instead of through each package's usual config file. This is useful for keeping everything centralized, but I also found that for some tools, such as Tailwind, I had more success using the package itself rather than the Nuxt module version of it. I think it often comes down to the module maintainer's ability to keep up with the package's dependencies, especially when those dependencies evolve as quickly as they do. For example, when using vue-query-nuxt, I found myself running into some strange errors that I was able to resolve by reverting to the Vue Query package itself. Admittedly, this was early in the project, so it is very possible the problem was that I was misusing it because I did not fully understand the workings of Nuxt.
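For the curious, the centralized config looks roughly like this. The module list and options below are an illustrative sketch, not this project's actual nuxt.config.ts:

```ts
// nuxt.config.ts — illustrative sketch, not the project's real config.
// Everything from registered modules to their options lives in this one file.
export default defineNuxtConfig({
  modules: [
    '@primevue/nuxt-module', // PrimeVue components
    '@nuxtjs/color-mode',    // dark mode toggling
  ],
  // Module options are configured here instead of in each package's own config file.
  colorMode: {
    classSuffix: '', // emit a plain `.dark` class instead of `.dark-mode`
  },
  css: ['~/assets/css/main.css'],
})
```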
"Real" databases take a bit of work to stand up and maintain. The small scale of my project meant I got to skip all that and use SQLite, which puts the whole database in a couple little files. The actual implementation of that was made even easier with the excellent better-sqlite3 library. Using Vue query, all parts of the platform could access and update the data through an API endpoint from a centralized location that also provided some very neat caching and other optimizing abilities.
To make the database a little easier to work with in TypeScript, I chose Drizzle for my ORM. I originally started the project without it, so when I integrated it, code clarity improved drastically, and I could type my function signatures to match the types representing the database objects. Now, if the underlying schema changes, I will see how those changes ripple across the codebase in the form of type errors.
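A minimal sketch of what that looks like, with an illustrative table (the real schema differs):

```ts
// Hypothetical schema and connection sketch (table/column names are made up).
import Database from 'better-sqlite3'
import { drizzle } from 'drizzle-orm/better-sqlite3'
import { sqliteTable, integer, text } from 'drizzle-orm/sqlite-core'

export const workdays = sqliteTable('workdays', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  startedAt: text('started_at').notNull(), // ISO timestamp string
  endedAt: text('ended_at'),               // null while the workday is open
})

const sqlite = new Database('workday.sqlite')
export const db = drizzle(sqlite)

// The inferred row type keeps function signatures in sync with the schema.
export type Workday = typeof workdays.$inferSelect
```

Because `Workday` is inferred from the table definition, a schema change immediately surfaces as type errors wherever that type is used.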
An interesting implementation detail I had to tackle was how to persist a user's site settings across sessions. One approach is to save their settings to a table in the database. However, that would mean their settings are the same across all devices they use to access the platform, and I wanted to let the user configure their site settings per device. This gave me a great opportunity to explore the browser's localStorage API, which I used to store the settings on the device in a manner that sticks around until the user clears their site data.
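A simplified sketch of the idea (the key name and settings shape are invented for illustration):

```ts
// Hypothetical sketch of device-local settings persisted via localStorage.
interface SiteSettings {
  notificationsEnabled: boolean
  notificationIntervalMinutes: number
}

const STORAGE_KEY = 'workday-settings'

const defaults: SiteSettings = {
  notificationsEnabled: false,
  notificationIntervalMinutes: 60,
}

export function loadSettings(): SiteSettings {
  // localStorage only exists in the browser, so guard against SSR.
  if (typeof window === 'undefined') return defaults
  const raw = window.localStorage.getItem(STORAGE_KEY)
  return raw ? { ...defaults, ...JSON.parse(raw) } : defaults
}

export function saveSettings(settings: SiteSettings): void {
  if (typeof window === 'undefined') return
  window.localStorage.setItem(STORAGE_KEY, JSON.stringify(settings))
}
```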
When my data needs grow too complex for SQLite, I'd like to migrate to a PostgreSQL database, possibly implemented with a self-hosted Supabase instance.
Did you know browsers can't run TypeScript files? I didn't. Or maybe I knew subconsciously or indirectly, since I knew that my project is bundled into minified JavaScript files before being served to the client. It didn't really hit me until I had to install a service worker on the client to enable push notifications.
I wanted users to have the option to enable periodic notifications that remind them of how long their workday has been open, even when they don't have the website open in the browser. To send the notification from the server, I accessed the Web Push protocol via the web-push library. On the browser side, the Push API receives the message and the service worker displays the notification. These two technologies made the implementation relatively painless.
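The server side is a thin wrapper around web-push. The sketch below is illustrative, not the project's actual code: it assumes the VAPID keys live in environment variables and that the client's PushSubscription was saved somewhere when the user opted in.

```ts
// Hypothetical server-side sketch using the web-push library.
import * as webpush from 'web-push'

webpush.setVapidDetails(
  'mailto:admin@example.com', // contact address for the push service (placeholder)
  process.env.VAPID_PUBLIC_KEY!,
  process.env.VAPID_PRIVATE_KEY!,
)

export async function sendWorkdayReminder(
  subscription: webpush.PushSubscription, // saved earlier when the user subscribed
  hoursOpen: number,
) {
  const payload = JSON.stringify({
    title: 'Workday reminder',
    body: `Your workday has been open for ${hoursOpen} hours.`,
  })
  await webpush.sendNotification(subscription, payload)
}
```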
The interesting part is the "even when they don't have the website open in the browser" part. To do this, I had to write code to register (install) a JavaScript file (the service worker) that runs on the browser in a separate thread, listening for incoming notifications from the server and displaying them to the user. That file was a bit of a challenge to debug, because its console logs don't go to the normal client console and the worker decides when to run once it is registered. Firefox has additional dev tools that show the registered workers, and you can use them to open a console specific to a service worker, but only while it is running.
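The push-handling half of that worker boils down to something like this hypothetical sketch (the payload shape matches whatever the server chooses to send, and the typings assume the WebWorker lib is enabled in tsconfig):

```ts
// sw.ts — hypothetical sketch of the service worker's push handler
// (compiled and served to the browser as plain JavaScript).
declare const self: ServiceWorkerGlobalScope

self.addEventListener('push', (event: PushEvent) => {
  // The payload shape here is whatever the server put in the push message.
  const data = event.data?.json() ?? { title: 'Workday', body: 'Update available.' }

  // Keep the worker alive until the notification has been shown.
  event.waitUntil(
    self.registration.showNotification(data.title, { body: data.body }),
  )
})

export {}
```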
A feature that was not initially on my roadmap, but snuck its way onto it when I realized it was possible, was refreshing the workday data across all clients when one client updates it. What I observed was that I could go to the website on my phone and my desktop at the same time, stop or start the workday on my phone, and the desktop would appear as if nothing had changed.
Enter: websockets.
Traditionally, the only time the server talks to a client is when the client sends a request. A websocket is basically a two-way persistent connection between a client and a server, which means the server can initiate a request to give the client some new information. The technology apparently has some caveats and implementation details, all of which are nicely abstracted away by socket.io. I applied this capability to notify all of the server's websocket-connected clients that it has new information for them.
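In spirit, the whole feature reduces to a broadcast and a listener. The sketch below is illustrative (the event name, port, and wiring are made up), not the project's actual setup:

```ts
// Hypothetical sketch of the "tell every client to refetch" idea.
// Server side (wherever the socket.io Server instance lives):
import { Server } from 'socket.io'

const io = new Server(3001, { cors: { origin: '*' } }) // port/CORS are illustrative

export function broadcastWorkdayUpdate() {
  // Every connected client gets told that the workday changed.
  io.emit('workday:updated')
}

// Client side (e.g. inside a Vue plugin or component):
// import { io as connect } from 'socket.io-client'
// const socket = connect('http://localhost:3001')
// socket.on('workday:updated', () => refetchWorkday())
```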
When I first dreamt up the idea of showing a live summary of the workday, I had a few design requirements in my head:
- Represent the active and paused segments of the workday.
- Represent when the user is working past the estimated 8-hour workday.
- Label the start, current, and estimated end points on the line.
Trying to stick with my existing libraries, I looked at PrimeVue's Timeline first, but the points were evenly spaced, and you could not change the spacing to show relative differences in segment lengths. Then I thought, "What if I used a line chart with points whose y-value is constant?" That would get me a straight line with varying spacing between the points. I would just need to find a way to show the line without showing the chart. Research led me to Chart.js, but it seemed like too much of the implementation was abstracted away for me to extract the line from the line chart. It's a very "batteries included" package whose customization is limited to the methods it provides.
I landed on D3.js. There are a lot of mixed emotions about this older package, but it felt like a box of Legos you could use to piece together some really unique builds. The library was designed before front-end frameworks really embraced reactivity and virtual DOMs, so it took a little wrangling to get it working right in Vue. I'm not convinced my implementation is the most efficient or cleanest, but it sure does look nifty and perfectly achieves the design requirements I set out with.
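The core of the approach boils down to a time scale plus one rectangle per segment, something like this hypothetical sketch (the data shape and dimensions are illustrative). In Vue, a function like this would be called from onMounted or a watcher with a template ref pointing at the SVG:

```ts
// Hypothetical sketch of the timeline's core: a time scale plus one rect per segment.
import * as d3 from 'd3'

interface Segment { start: Date; end: Date; kind: 'active' | 'paused' }

export function drawTimeline(svgEl: SVGSVGElement, segments: Segment[], width = 600) {
  if (segments.length === 0) return

  const dayStart = segments[0].start
  const dayEnd = segments[segments.length - 1].end

  // Map time onto horizontal pixels; unequal segment lengths stay unequal.
  const x = d3.scaleTime().domain([dayStart, dayEnd]).range([0, width])

  d3.select(svgEl)
    .selectAll('rect')
    .data(segments)
    .join('rect')
    .attr('x', (d) => x(d.start))
    .attr('width', (d) => x(d.end) - x(d.start))
    .attr('y', 20)
    .attr('height', 8)
    .attr('fill', (d) => (d.kind === 'active' ? 'steelblue' : 'lightgray'))
}
```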
Man, oh man. I was pretty confident this would be a challenging thing to implement. Even still it was harder than I anticipated. My basic idea was to use PrimeVue's DataTable component to display an interactive table where the user could filter, sort, edit, and delete data about their workday history. I started with some sample data loaded into memory and did these operations solely on the front-end just to get the component in place without worrying about the complexities of the back-end logic. Then, I hooked up the features to a back-end API endpoint one by one: fetching the live data, editing cells, pagination and sorting, deleting rows, and finally filtering.
I'm not convinced this was the best way to go about this implementation. I think I ended up with cluttered, inefficient, and disorganized code. This is because the DataTable component's front-end implementation of table operations is very powerful, and someone clearly spent a lot of time thinking about how to make it great. However, converting those operations to back-end equivalents was not always a 1:1 match.
By far the most difficult aspect was managing the shape of the data between the front-end, back-end, and database:
- The front-end displays start and end times, which are not columns in the database.
- The application uses Date objects, but fetch calls serialize them into strings (see the sketch after this list).
- The back-end had to interpret the generated start and end times to pull the correct data from the start and end date columns in a way that is both sortable and filterable.
- The entire application needs to properly handle requests coming from clients in any timezone.
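As one small, hypothetical example of the glue code this forces (the row shape and helper name are invented), the client has to revive the serialized dates before the UI can work with them:

```ts
// Hypothetical sketch: reviving serialized dates on the client.
// Over the wire, Date objects arrive as ISO strings; the UI wants Dates back.
interface WorkdayRowWire {
  id: number
  startedAt: string       // ISO string after JSON serialization
  endedAt: string | null
}

interface WorkdayRow {
  id: number
  startedAt: Date
  endedAt: Date | null
}

export function reviveWorkdayRow(row: WorkdayRowWire): WorkdayRow {
  return {
    ...row,
    startedAt: new Date(row.startedAt),
    endedAt: row.endedAt ? new Date(row.endedAt) : null,
  }
}

// Storing and comparing timestamps in UTC, and only formatting into the
// client's timezone for display, keeps sorting and filtering consistent.
```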
So I did the best I could to overcome these challenges, however clumsily that turned out. I started getting major project fatigue toward the end, and I had to remind myself why I did this in the first place: to learn about some web technologies I hadn't yet experienced. No one is paying me to do it, I'm the only user of the platform, and I'm not working on it full-time, so it doesn't need to be anywhere close to perfect.
In today's internet, a website won't survive very long without some protection. Security is hard: one poorly written line of code and suddenly some bad actor has your uranium enrichment facility on the fritz or you're leaking everyone's DNA all over the internet¹. This is why I've entrusted my site's security to nuxt-auth-utils and nuxt-security. This is not a project to explore secure design practices, and the data stored on it is not particularly sensitive, so I wasn't going to spend a lot of time on it. But I also didn't want to put it all out there without some sort of defense. To the white hat folks out there, please email me if you find something egregiously wrong with my website's security.
¹ Yes, this is a gross oversimplification of how these two incidents occurred, but the stories are sensational and highlight the importance of good security practices.
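In practice, most of the protection amounts to gating the API routes behind a session. Here's a hypothetical example of what a protected Nuxt server route looks like with nuxt-auth-utils (the route path and returned payload are illustrative):

```ts
// server/api/workday/current.get.ts — hypothetical protected endpoint.
// defineEventHandler is Nuxt's server-route wrapper; requireUserSession comes
// from nuxt-auth-utils (auto-imported) and throws a 401 if there is no valid session.
export default defineEventHandler(async (event) => {
  await requireUserSession(event)

  // ...fetch and return the current workday for the logged-in user...
  return { status: 'open', startedAt: new Date().toISOString() }
})
```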
JavaScript doesn't really mind how you use it. It's very powerful and lets you get away with some pretty crazy stuff. The downside is that your code can start to behave erratically if you sloppily write something that seems to work but is built out of bubblegum and popsicle sticks. Typically, this is more of a concern for large, team-based projects, but I wanted to enforce some kind of standard on myself.
TypeScript introduces build-time type safety and a level of self-documentation to JavaScript. Read their article justifying TypeScript's existence to understand why this is important.
Biome.js keeps me on the straight and narrow by calling me out for poorly designed code via its linting rules. Its formatter also keeps my files looking sharp (although it annoyingly doesn't fully support .vue and Tailwind .css files).
Knip keeps my project from getting bloated with unused dependencies and dead code. However, it seems to get pretty confused by Nuxt's auto-import feature, and the Knip docs even suggest disabling auto-imports to get the best results.
This one goes out to my pals Node.js and npm, the unsung heroes that make programming in JavaScript a team sport. When one person writes great software, all it takes is a quick `npm i` and we all benefit from it.
Ha, who has time for that? We have features to push!
In all seriousness, this is very important, and I should get around to it sooner rather than later so that the towering house of cards that is my interconnected code doesn't come crashing down when I make one small change.
This is my first-ever website hosted on the real Internet whose deployment details aren't abstracted away by GitHub pages or PythonAnywhere. While these services were instrumental to my journey as a web developer, it was time to put on my big boy pants and figure it out myself.
The first thing I needed was a domain to legitimize my website and give users a web address to access it. Porkbun very happily sold me two (I only needed one, but at $12 each I couldn't resist). They also gave me an easy way to tell the rest of the internet that my website lives here.
Next, I needed to put my project on a server that could handle the level of traffic I anticipated (me and a few of my friends). This project also has the particular limitation of using SQLite. Since SQLite stores data in a file within the project, it did not qualify for serverless and auto-scaling services that dynamically spin up new instances as web traffic increases. By their very nature, those services would delete my web app, and its SQLite file with it, without me even knowing. This led me to a Virtual Private Server provided by Racknerd. I chose them because they were arguably the cheapest service that did not seem to sacrifice quality, but Hetzner came in a close second.
Once I was equipped with a VPS, I used Caddy to accept inbound requests and shuffle them off to the relevant destination (this web app), and PM2 to keep the web app running as a long-running process.
To get the project to the server, I used the Git CLI to clone the repo from GitHub over SSH. This is a very manual process and I am desperately in need of a good CI pipeline, once I figure out how to do all that.
This document you are viewing right now is actually my project's README.md. I used this cool library called showdown to convert the Markdown file into HTML that I can render on the website. I wanted to provide some information about the project in the repository and on the website without having to maintain it in two places. This is my answer.
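The conversion itself is a few lines of showdown. This sketch is illustrative (the file path and the v-html wiring are assumptions about how it could be hooked up, not necessarily how it is here):

```ts
// Hypothetical sketch: turning README.md into HTML with showdown (server-side).
import showdown from 'showdown'
import { readFileSync } from 'node:fs'

const converter = new showdown.Converter({ tables: true }) // option shown for illustration
const markdown = readFileSync('README.md', 'utf-8')
export const readmeHtml = converter.makeHtml(markdown)

// On the Vue side, the resulting HTML can be rendered with v-html, e.g.
// <div v-html="readmeHtml" />
```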
- The workday stopwatch that shows:
  - Current time spent working.
  - Time spent on last workday.
- User flow to start, stop, pause, and unpause workday.
- Workday data persistence through SQLite at an API endpoint.
- Data retrieval and caching through Vue-query.
- Theming, styling, fonts, and responsive design.
- Workday push notifications to clients through:
  - Service workers to run code on the client,
  - `web-push` to send notifications from the server and,
  - Browser Push API to display notifications to the user.
- Client-based persistent notifications settings through local storage API.
- Very basic auth through `nuxt-auth-utils` to protect website and API endpoints.
- Fully type the project and correct type errors.
- ~~Turn off Nuxt auto-imports.~~ There is too much involved in this to make it viable for a personal project at this stage.
- Write comprehensive test suites (one day).
- Create a CI/CD pipeline.
- Upgrade the push notification to take you to the site when you click it.
- Dark mode 😎
- Live workday status updates via `socket.io` and websockets.
- Implement nuxt-api-shield.
- Workday Timeline feature.
  - ~~Probably use Chart.js line chart but without the chart.~~ Actually ended up with a D3.js line chart.
- Migrate from SQLite to a PostgreSQL database.
- Workday data editor.
  - Implemented through the PrimeVue DataTable component.
  - Download data as CSV button.
- Workday data dashboard.
- How to use Primevue Toast outside of a component
- How to make a push notification service worker
- How to see console logs for service worker
- HTTP response status codes
- How to do basic Nuxt auth
- Putting a Nuxt3 app on a VPS
- Vue plugins vs. composables vs. stores
- Good practices and Design Patterns for Vue Composables
- Common push notification patterns
- A very cool 7-segment font
- How to install DNS provider modules in Caddy 2
- How to use D3.js in Vue
- Literally every doc for any library I used
- Google's Gemini and Phind's 70b AIs (I won't hide it)
- My friend, M***h
See CONTRIBUTING.md.