Follow my listening habits... live!

22/05/2020

Have you ever thought, you know, I wonder what Brett Jenkins is listening to right this very second? Of course you have...

So this is one of those things I did just because I can, rather than to solve any particular problem. I thought I'd do a write-up on how I've personally approached it, and tell you a little about how my home automation system works along the way.

I've been using Home Assistant for a number of years now. Before that I used openHAB, but the move to openHAB v2 was going to force a bunch of changes on my setup anyway, so I took the opportunity to try the new kid on the block, Home Assistant, which I still use to this day, running in a Docker container on my Linux Docker host VM in the garage.

Home Assistant talks to all my home devices, including my Sonos devices. For my automations I use a program called NodeRed, which is amazing. It provides a drag-and-drop experience that is actually extremely powerful, far more powerful in my experience than Home Assistant's YAML-based automations, as you can easily write complex logic by making loops, writing JavaScript, creating subflows (like methods/functions) that can be reused elsewhere, etc.

So back to the topic at hand. When Home Assistant registers a change in music on my office Sonos, a NodeRed automation runs and fires off a REST PUT request to this server, hosted on AWS. This site runs on Umbraco, built in my native language of C#, so it's trivial to add a WebAPI controller to take that request (locked down to my IP for security), parse the JSON body into a C# object, and store that value in the runtime cache. Currently this site runs on a single server, and I don't care about persistence if the site crashes, so it's the perfect solution for now. Of course, if I ever decide to run a couple of servers, I'll have to move it either to a database or a second-level cache, but I'll cross that bridge when I come to it.
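To make that a bit more concrete: strip away the Node-RED UI and the request it fires boils down to the equivalent of the sketch below (the URL, route and field names here are simplified for this post, not the real ones):

```typescript
// The shape of the JSON object the controller deserialises into a C# object.
// Field names are illustrative, not the real payload.
interface NowPlaying {
    artist: string;
    track: string;
    album: string;
}

// Node-RED's HTTP request node does the actual PUT, but it amounts to this:
async function pushNowPlaying(track: NowPlaying): Promise<void> {
    await fetch("https://example.com/umbraco/api/nowplaying", {
        method: "PUT",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(track),
    });
}
```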

Then on the UI side, I'm using Vue to render the markup and SignalR to push the changes to the browser (it also sends the currently playing song on connection). SignalR is so fast that sometimes the text changes on the website before the first note of the song has played! It's also so much nicer and more efficient having the server push changes rather than polling for them every X seconds.
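The client side of that is only a handful of lines. Here's a rough sketch written against the @microsoft/signalr client package (the hub URL and event name are made up for this post, and the older jQuery-based SignalR client looks a little different):

```typescript
import * as signalR from "@microsoft/signalr";

interface NowPlaying {
    artist: string;
    track: string;
    album: string;
}

// Open the hub connection and hand every pushed track to the Vue component.
export function listenForNowPlaying(onTrack: (track: NowPlaying) => void): Promise<void> {
    const connection = new signalR.HubConnectionBuilder()
        .withUrl("/hubs/nowplaying")
        .build();

    // The hub pushes the currently playing song as soon as the connection
    // opens, so there's no initial GET and no polling.
    connection.on("nowPlaying", onTrack);

    return connection.start();
}
```

The Vue component just passes in a callback that updates its reactive data, and Vue takes care of re-rendering the markup.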

I had a slight problem in that I wanted the Vue component to output different markup in different places depending on mobile or desktop. At first I attempted to tackle this with the age-old user-agent sniffing technique, which is always rubbish. Luckily I found vue-mq, which uses media queries, the same way you can in CSS, creating truly responsive Vue components!
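Setting it up is a couple of lines in the webpack entry point, something like this (the breakpoint names and widths are just an example):

```typescript
import Vue from "vue";
// If vue-mq doesn't ship its own types, a one-line `declare module "vue-mq";`
// shim keeps the TypeScript compiler happy.
import VueMq from "vue-mq";

// Each breakpoint becomes a matchMedia query under the hood, exactly like
// a CSS media query, and the matching name is exposed to every component.
Vue.use(VueMq, {
    breakpoints: {
        mobile: 768,
        desktop: Infinity,
    },
});
```

After that, every component gets a reactive $mq property holding the name of the current breakpoint, so the template can switch markup with a plain v-if rather than sniffing user agents.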

Simples. 

It was a good exercise. Even though I use Vue daily in my day job, I've never personally had to set it up and integrate it into an existing website using webpack and TypeScript. That was made harder by npm installing beta versions of packages which broke everything 🙂, but I got there in the end, and it was a great learning experience. I also enjoyed working with SignalR, which I'd never had a chance to play with before, and loved how easy it was to use and how efficient it is.

Anyways, I digress. This was an interesting little project that has added a rather pointless feature to my website. My partner likes it though, so it can't be all bad!