Here’s a small deep dive into how I automated my now page on my personal website using Golang and GitHub Actions.
First of all, you might be wondering what a now page actually is. Most websites have an ‘about’ page which explains the background of a certain individual or a business.
A ‘now’ page is a page that tells you what a person is focused on currently. Now pages have become more popular on personal websites and you can find several people with now pages on nownownow (including my own).
So a while ago I added a now page to share what I was up to, what I’m currently learning, some fitness goals, places I’ve travelled to this year and in the last year as well as some stats around what media (books, movies, TV shows and video games) I’ve recently been reading/watching/playing.
Wanting to share all of this information led to one issue…
Manual updates are difficult
My website is a static website built using Hugo. That means to make a change to the now page, I’d need to manually edit the now file, commit the changes, push them to GitHub and then merge them into my main branch.
Doing this once or twice wasn’t so bad, but with how frequently I needed to update the media section of my now page, the process started getting repetitive and time-consuming.
So I thought, what if I could automate the now page? 🚀
Inspiration
Before getting started, I had a look around online to see if anyone else had managed to do something like this and if they had open-sourced their code.
I found that another blogger/software engineer, Robb Knight, had already written a post, ‘Automating My Now Page’, that was trying to solve the exact problem I had with my now page.
Robb’s solution was to use other services to track what he was reading, watching, listening to and playing, fetching that data periodically and then formatting and using that data in a way that was useful for his now page.
This made me realise I was going to need a good ‘source’ or service which would allow me to track these things. Exploring this led me to another blog post, this time written by Sophie Koonin, about how ‘Everything should have an API’, which helped me find some services that I could use to track some of my media consumption.
One key difference I found between Robb and Sophie’s websites and my own was that theirs were built using another static site generator called ‘Eleventy’, which is written in JavaScript, whereas Hugo’s codebase is Go. I wanted to work on my Golang scripting skills, so I realised pretty early on that I was going to have to start this project from scratch if I wanted to write a Go script to automate my now page.
Before doing so, I ended up searching for some examples of how others had used Go scripts to automate stuff and came across a great post by Victoria Drake on how you could update your GitHub Profile README using Golang. This tutorial led me to discover a handy little Go package called ‘gofeed’ that can be used to read RSS feeds in Go. Victoria also covered how to run the Go script on a schedule using GitHub Actions in the later part of the tutorial.
To put my understanding to the test, I ended up following the tutorial to automate my own GitHub Profile README by pulling in the latest posts from my websites using RSS. I added some extra parts to the script and ended up with a fantastic README that updated automatically.
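To give a rough idea of what that mini side-project does, here’s a minimal sketch rather than my actual README script (the feed URL and the post count are placeholders): parse the feed with gofeed and print the latest posts as a markdown list.

package main

import (
    "fmt"
    "os"

    "github.com/mmcdole/gofeed"
)

func main() {
    // Parse the site's RSS feed (placeholder URL)
    feed, err := gofeed.NewParser().ParseURL("https://example.com/index.xml")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Print the three most recent posts as a markdown list
    for i, item := range feed.Items {
        if i == 3 {
            break
        }
        fmt.Printf("- [%s](%s)\n", item.Title, item.Link)
    }
}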
Armed with some confidence from this mini side-project, I decided to start looking into the best way of sourcing the data for each media type I wanted to track…
Automating!
Given that the gofeed package made things super simple to fetch data using RSS feeds, I decided to first look into services that offered RSS feeds for tracking things. This search eventually led me to a service for tracking movies…
Movies
To track movies I’d watched, I first looked into a free service that I knew a few of my friends were already using, Letterboxd. Immediately, I managed to spot that profiles on Letterboxd had a small RSS icon at the end of the sub-menu. Clicking this revealed an RSS URL that contained a list of movies the user had recently watched. After doing a bit of testing, I found the only way to add items to this RSS feed is to click the ‘Review or log’ button on a movie, rather than just marking it as ‘watched’.
Once I figured out how to add movies to the RSS feed, I wrote a function that uses the gofeed package to return a slice of elements (specifically gofeed.Item values) from the RSS feed.
func getLetterboxdItems(input string) ([]gofeed.Item, error) {
    var items []gofeed.Item
    feedParser := gofeed.NewParser()
    feed, err := feedParser.ParseURL(input)
    if err != nil {
        return nil, err
    }
    for _, item := range feed.Items {
        items = append(items, *item)
    }
    return items, nil
}
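Calling it just means passing in the profile’s feed URL. The URL below follows the pattern I’d expect for a Letterboxd RSS link (swap in your own username) and is purely illustrative:

items, err := getLetterboxdItems("https://letterboxd.com/USERNAME_HERE/rss/")
if err != nil {
    log.Fatal(err)
}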
Obtaining titles and URLs
After getting back the slice of elements from this function, the only data I needed from each element in the slice was the movie’s title and the URL of the movie’s page on the Letterboxd website.
I wrote a function to loop through the elements of the slice and store the title and URL of the element in a new map, which was then added to a slice of maps.
func latestItems(items []gofeed.Item, count int) []map[string]string {
    var itemSlice []map[string]string
    for i := 0; i < count; i++ {
        item := make(map[string]string)
        item["title"] = items[i].Title
        item["url"] = items[i].Link
        itemSlice = append(itemSlice, item)
    }
    return itemSlice
}
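Chaining the two functions together gives the data for the most recent films. A quick usage sketch (it assumes the feed has at least three items, since latestItems doesn’t guard against a short feed):

movies := latestItems(items, 3)
for _, movie := range movies {
    fmt.Printf("%s -> %s\n", movie["title"], movie["url"])
}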
Parsing titles and URLs
While I was able to get the title and URL for each movie, I noticed some issues with both that would prevent me from using the data on my now page as-is.
The movie title field would contain the movie name followed by a comma and the year that the movie was released. Additionally, if a user reviewed or rated the movie, then the stars were also included in the title. E.g. If a movie with the title ‘Deadpool & Wolverine’, released in 2024, was logged on a user’s Letterboxd page with a 5-star review then the title for the RSS feed item would be ‘Deadpool & Wolverine, 2024 - ★★★★★’.
I wanted to omit the comma followed by the year and optional star rating, so I created a new function to parse these parts out from the title using regex (regex pattern was created with the help of ChatGPT).
const movieTitlePattern = `, (\d{4})(?: - ?[★]{0,5}(½)?)?$`

func GetMovieTitle(input string) string {
    // Removes ', YYYY - ★★★★' from movie titles
    // The regex pattern looks for the following in a movie title:
    // - `, 2020` (No rating given)
    // - `, 2020 - ★★★★` (rating given)
    re := regexp.MustCompile(movieTitlePattern)
    title := re.Split(input, -1)
    return title[0]
}
To ensure that my regex was correct, I ended up also writing a unit test (using the testify package) with a table of test cases of movie title variations to check the regex extracted just the title of the movie.
func TestGetMovieTitle(t *testing.T) {
    tests := []struct {
        title    string
        expected string
    }{
        {"Movie Title, 2024", "Movie Title"},
        {"Movie Title, the sequel, 2023 - ★★★★★", "Movie Title, the sequel"},
        {"Movie - Title, 2022 - ★★★★", "Movie - Title"},
        {"Movie Title and the movie title, 2021 - ★★★", "Movie Title and the movie title"},
        {"Movie, Title, 2020 - ★★", "Movie, Title"},
        {"Movie, - Title, 2019 - ★", "Movie, - Title"},
        {"The Movie, 2023 - ★★½", "The Movie"},
        {"Movie Title, 2018 - ", "Movie Title"},
        {"Movie Title", "Movie Title"}, // Edge case: No year or rating
        {"2024, Movie Title", "2024, Movie Title"}, // Edge case: Year at the start
        {"Movie Title - ★★★★★", "Movie Title - ★★★★★"}, // Edge case: Rating but no year
    }
    for i := range tests {
        title := tests[i].title
        expected := tests[i].expected
        actual := GetMovieTitle(title)
        require.Equal(t, expected, actual)
    }
}
Once I was happy with this I then looked into my second issue. The URL field contained a URL for the user’s log of the movie rather than just the regular URL of the movie’s Letterboxd page.
For example, if a user called ‘MovieWatcher’ logged the movie ‘Deadpool & Wolverine’, then the URL field would be ‘https://letterboxd.com/MovieWatcher/film/deadpool-wolverine/' rather than ‘https://letterboxd.com/film/deadpool-wolverine/'.
I needed to remove the username part of the URL, so I created another function and made use of another regular expression to achieve this.
const (
    Url                  = "https://letterboxd.com/"
    movieUrlWithUsername = `https:\/\/letterboxd\.com\/([^\/]+)\/`
)

func GetMovieUrl(movieUrl string) string {
    // Get Letterboxd item link without the username
    // Replaces "https://letterboxd.com/USERNAME_HERE/film/MOVIE_TITLE/" with "https://letterboxd.com/film/MOVIE_TITLE/"
    usernamePattern := regexp.MustCompile(movieUrlWithUsername)
    formattedUrl := usernamePattern.ReplaceAllString(movieUrl, Url)
    return formattedUrl
}
As with the previous function, I wrote some unit tests for this new one as well to ensure it worked correctly.
func TestGetMovieUrl(t *testing.T) {
    tests := []struct {
        url      string
        expected string
    }{
        {"https://letterboxd.com/USERNAME_HERE/film/Movie/", "https://letterboxd.com/film/Movie/"},
        {"https://letterboxd.com/USERNAME_HERE/film/Movie-Title", "https://letterboxd.com/film/Movie-Title"},
        {"https://letterboxd.com/USERNAME_HERE/film/Movie-Title-and-the-movie-title", "https://letterboxd.com/film/Movie-Title-and-the-movie-title"},
    }
    for i := range tests {
        url := tests[i].url
        expected := tests[i].expected
        actual := GetMovieUrl(url)
        require.Equal(t, expected, actual)
    }
}
Up next, displaying what I was reading…
Books
At the time, I was already using GoodReads (extremely infrequently) to track what I was reading, so I thought it was a good starting point to see whether I could get my reading history from GoodReads via RSS.
It turned out that GoodReads did provide an RSS feed, but it had a lot of issues. For some reason, the feed wasn’t valid, and it lacked some granularity - I couldn’t group books in the way I wanted. In the end, I decided to start looking into a new book tracking service to track what I was reading/had already read.
Sophie mentioned a website called Oku in her blog post, so I decided to look into that first. After playing around with the website, I found that it was perfect for what I wanted!
The UI for Oku is way better than GoodReads’ site (sorry to any hardcore GoodReads users), I could create ‘collections’ to sort and organise books, and I could track what I’d already read, what I wanted to read in the future, as well as what I was currently reading. The best part of all of this was that each ‘collection’ had a separate RSS feed!
Initially, I was a bit puzzled about how to find the RSS feed URL for a collection, but Oku had already written a guide on how to find this URL. It was hidden away in the page source. Not a massive issue, but it would’ve been nice to have an RSS icon somewhere in the front end instead. The guide also included a few examples of others who were displaying what they were reading on their own websites using these RSS feeds.
Since both books and movies were being fetched via RSS feeds, I decided to turn the initial getLetterboxdItems function into a more generic function that could be used with any valid RSS feed link. I ended up renaming it to getGoFeedItems.
func getGoFeedItems(input string) ([]gofeed.Item, error) {
    var feedItems []gofeed.Item
    feedParser := gofeed.NewParser()
    feed, err := feedParser.ParseURL(input)
    if err != nil {
        return nil, err
    }
    for _, item := range feed.Items {
        feedItems = append(feedItems, *item)
    }
    return feedItems, nil
}
I then also adapted the latestItems function so that it would work with non-Letterboxd RSS feeds. I decided to implement this via a branching logic flow that checks whether an item’s link has the prefix “https://letterboxd.com” to determine if it is a Letterboxd feed item. The final function was called latestGoFeedItems.
func latestGoFeedItems(items []gofeed.Item, count int) []map[string]string {
    var itemSlice []map[string]string
    for i := 0; i < count; i++ {
        item := make(map[string]string)
        if strings.HasPrefix(items[i].Link, "https://letterboxd.com") {
            item["title"] = letterboxd.GetMovieTitle(items[i].Title)
            item["url"] = letterboxd.GetMovieUrl(items[i].Link)
        } else {
            item["title"] = items[i].Title
            item["url"] = items[i].Link
        }
        itemSlice = append(itemSlice, item)
    }
    return itemSlice
}
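With those two generic functions in place, movies and books go through the same pipeline. In the usage sketch below, the Letterboxd URL follows the pattern mentioned earlier and the Oku URL stands in for the collection feed URL found in the page source (both are placeholders):

movieItems, err := getGoFeedItems("https://letterboxd.com/USERNAME_HERE/rss/")
if err != nil {
    log.Fatal(err)
}
latestMovies := latestGoFeedItems(movieItems, 3)

bookItems, err := getGoFeedItems("https://oku.club/rss/collection/COLLECTION_ID_HERE")
if err != nil {
    log.Fatal(err)
}
latestBooks := latestGoFeedItems(bookItems, 3)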
Video Games
Next up was figuring out how to get a list of video games I was either currently playing or had recently played across all systems I own (at the time of writing, I play video games on my Nintendo Switch and my gaming PC).
Again, referring to what Robb and Sophie had already looked into, I found that Robb was scraping his latest trophies from psnprofiles.com for PlayStation games and Sophie had looked into rawg.io. Unfortunately, neither of these would work for what I was looking to do with my now page. I don’t own a PlayStation, so the functionality around PSN logging was redundant for me. Sophie mentioned in her post that scraping wasn’t possible with rawg.io as the whole site was a single-page app, and that it didn’t actually log what games you were currently playing, but rather just games that you have.
My search led me to an alternative website called Backloggd. The site is free to use and includes games from every platform (powered by The Internet Game Database).
I ended up creating a profile and started adding in games I’d already played, games I wanted to play (or more specifically games on my ‘backlog’) and then finally I added the games I was currently playing. The UI was easy to use, and finding games was a breeze (a manual process unfortunately but still great overall), but that was when I came across my main issue with the service…
There was no RSS feed or API that I could use to fetch data on games I was currently playing 🙃
I ended up joining the community Discord to see if this had been planned for development. I found a few mentions of other users requesting RSS feeds or an API but it didn’t seem to be prioritised and there was no official news from the site developer on if this was going to be added any time soon.
Despite this clear setback, I wasn’t giving up just yet. In the past, I’d done a lot of web scraping using Python and given I could access my profile page without having to log in, I started wondering if I could web scrape the page using Golang 🤔
Web scraping in Go isn’t something I’d done before, so I first started by checking if there was a package similar to Python’s selenium. My search led me to two packages: a version of selenium for Go, and Colly.
Using selenium would have required downloading/maintaining some dependencies (mainly a web driver), so I looked into Colly as a lighter-weight alternative. After going through some of the docs and testing what I’d learned via some tutorials, I was able to put together a great little sub-function that scraped the games under the ‘Playing’ section of my Backloggd profile 🎉
As with Letterboxd movies and Oku books, I was also able to scrape the page URL for each game, so I added those links to my now page as well for each game I was currently playing.
My web scraping function returns a slice of maps, each consisting of a title and a URL. Since I could be playing three or more games at any point in time (not all at the same time, of course), I didn’t end up adding anything to limit the number of items included.
func GetGames(url string) ([]map[string]string, error) {
    var games []map[string]string
    c := colly.NewCollector()
    c.OnHTML("div.rating-hover", func(e *colly.HTMLElement) {
        game := make(map[string]string)
        game["title"] = e.ChildText("div.game-text-centered")
        game["url"] = Url + e.ChildAttr("a", "href")
        games = append(games, game)
    })
    err := c.Visit(url)
    if err != nil {
        return nil, err
    }
    if len(games) == 0 {
        err := errors.New("no games found")
        return nil, err
    }
    return games, nil
}
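Calling the scraper only needs the public profile URL. In the sketch below, the profile URL and the backloggd package name are assumptions (and Url in the function above appears to be a Backloggd base URL constant defined elsewhere in the package, used to turn the scraped relative links into absolute ones):

games, err := backloggd.GetGames("https://backloggd.com/u/USERNAME_HERE/")
if err != nil {
    log.Fatal(err)
}
for _, game := range games {
    fmt.Printf("%s -> %s\n", game["title"], game["url"])
}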
TV Shows
I watch quite a few TV shows and I’ve always found keeping track of them difficult. I really liked Letterboxd but it didn’t really support TV shows and I really wanted something similar to track what I was watching.
I found two websites that offered what I wanted; Trakt and Serializd.
Looking into Trakt first, I found that they did offer an RSS feed I could make use of; however, this feature was only available to paid VIP users of the site. While the subscription wasn’t too much, I had little use for all the other features included in the VIP offering, so I decided to look into Serializd instead.
Serializd has a great UI and an easy way to track and log what you’ve watched. Unfortunately, it didn’t offer an RSS feed, but after digging into the network calls the website made via Chrome dev tools, I found some API endpoints, including a diary endpoint that could be used to fetch recently watched TV shows and episodes 🎉
Using this endpoint, I crafted a new HTTP GET request with the required headers and created a client to make the request and capture the response from the website.
req, err := http.NewRequest("GET", url, nil)
if err != nil {
    return nil, err
}

// Request headers
req.Header.Set("Accept", "application/json, text/plain, */*")
req.Header.Set("Accept-Encoding", "gzip, deflate, br, zstd")
req.Header.Set("Accept-Language", "en-US,en;q=0.9")
req.Header.Set("Dnt", "1")
req.Header.Set("Referer", url)
req.Header.Set("Sec-Ch-Ua", `"Chromium";v="123", "Not:A-Brand";v="8"`)
req.Header.Set("Sec-Ch-Ua-Mobile", "?1")
req.Header.Set("Sec-Ch-Ua-Platform", `"Android"`)
req.Header.Set("Sec-Fetch-Dest", "empty")
req.Header.Set("Sec-Fetch-Mode", "cors")
req.Header.Set("Sec-Fetch-Site", "same-origin")
req.Header.Set("User-Agent", "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36")
req.Header.Set("X-Requested-With", "serializd_vercel")

client := &http.Client{}
rsp, err := client.Do(req)
if err != nil {
    return nil, err
}
defer rsp.Body.Close()

if rsp.StatusCode != http.StatusOK {
    return nil, fmt.Errorf("unexpected status code: %v", rsp.StatusCode)
}
This API exploration also led to me mapping out a number of the website’s endpoints, and I ended up putting together a small Golang package called unserializd as an unofficial way of accessing public profile data using Golang.
I found that the contents of the response weren’t usable initially, as the response was ‘gzipped’. (Go’s HTTP client only transparently decompresses gzip when it adds the Accept-Encoding header itself; since I was setting that header manually, I needed to handle the decompression.) To get around this, I added another part to the code to check if the response was gzipped and to read the contents using the gzip package’s NewReader function.
// Check if the response is gzipped
var reader io.Reader
if rsp.Header.Get("Content-Encoding") == "gzip" {
    gz, err := gzip.NewReader(rsp.Body)
    if err != nil {
        return nil, err
    }
    defer gz.Close()
    reader = gz
} else {
    reader = rsp.Body
}

body, err := io.ReadAll(reader)
if err != nil {
    return nil, err
}
Now that the data was no longer gzip encoded, it was time to read the contents of the response. As the contents of the response were JSON encoded, I first created a diary struct in Go to match the data that was to be unmarshaled.
type SerializdDiary struct {
    Reviews      []SerializdDiaryReview `json:"reviews"`
    TotalPages   int                    `json:"totalPages"`
    TotalReviews int                    `json:"totalReviews"`
}

type SerializdDiaryReview struct {
    ID               int          `json:"id"`
    ShowID           int          `json:"showId"`
    SeasonID         int          `json:"seasonId"`
    SeasonName       string       `json:"seasonName"`
    DateAdded        string       `json:"dateAdded"`
    Rating           int          `json:"rating"`
    ReviewText       string       `json:"reviewText"`
    Author           string       `json:"author"`
    AuthorImageUrl   string       `json:"authorImageUrl"`
    ContainsSpoiler  bool         `json:"containsSpoilers"`
    BackDate         string       `json:"backdate"`
    ShowName         string       `json:"showName"`
    ShowBannerImage  string       `json:"showBannerImage"`
    ShowSeasons      []ShowSeason `json:"showSeasons"`
    ShowPremiereDate string       `json:"showPremiereDate"`
    IsRewatched      bool         `json:"isRewatched"`
    IsLogged         bool         `json:"isLogged"`
    EpisodeNumber    int          `json:"episodeNumber"`
}

type ShowSeason struct {
    ID           int    `json:"id"`
    Name         string `json:"name"`
    SeasonNumber int    `json:"seasonNumber"`
    PosterPath   string `json:"posterPath"`
}

var diary SerializdDiary
if err := json.Unmarshal(body, &diary); err != nil {
    return nil, err
}
After unmarshalling the response, the part I was interested in was just the reviews section. So I stored the contents of that in a new variable and then looped through the slice of reviews, building a map for each show (with the season I watched) along with the show’s URL, and storing the results in the shows slice.
var shows []map[string]string
reviews := diary.Reviews

for r := range reviews {
    show := make(map[string]string)
    var showAndSeason string
    review := reviews[r]
    reviewSeasonID := review.SeasonID

    // Loop through review.showSeasons to find season name using review.SeasonID
    for s := range review.ShowSeasons {
        season := review.ShowSeasons[s]
        if reviewSeasonID == season.ID {
            review.SeasonName = season.Name
        }
    }

    // format showName with SeasonName and store in output
    showAndSeason = fmt.Sprintf("%v, %v", review.ShowName, review.SeasonName)
    show["title"] = showAndSeason

    // get show url
    const showBaseUrl = "https://www.serializd.com/show/"
    showUrl := showBaseUrl + fmt.Sprint(review.ShowID)
    show["url"] = showUrl

    // Append show to shows only if show["title"] is not already present in the slice
    if !containsValue(shows, "title", show["title"]) {
        shows = append(shows, show)
    }
}
Before adding a show to the shows slice, I first wanted to check if the show’s title was already present in the shows slice and only add the show if it was not present, to prevent duplicates. To help with this, I wrote a small utility function, containsValue.
func containsValue(slice []map[string]string, key, value string) bool {
    for _, m := range slice {
        if val, ok := m[key]; ok && val == value {
            return true
        }
    }
    return false
}
To then limit the number of shows displayed on my now page, I created a new function that returns the latest shows from the slice, with the number controlled by a count input variable.
func LatestShows(items []map[string]string, count int) []map[string]string {
    var shows []map[string]string
    for i := 0; i < count; i++ {
        shows = append(shows, items[i])
    }
    return shows
}
Bonus - Travel stats
Finally, as a bonus, I wanted to see if I could also track places I’d visited automatically. I already made use of the trip tracker feature on NomadList and I noticed that there was an ‘Export as API’ option on my NomadList profile… 👀
Clicking this led me to a URL with my public data on the site in JSON, including previous trips!
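Pulling the trips out of that JSON looks something like the sketch below. The wrapper object and field names are assumptions about the payload rather than the exact NomadList schema, but each trip ends up in the same map[string]string shape the rest of the script uses (this sketch relies on net/http and encoding/json):

// Hypothetical shape of a NomadList trip entry; the real field names may differ
type nomadListTrip struct {
    Place       string `json:"place"`
    CountryName string `json:"country"`
    CountryCode string `json:"country_code"`
    StartDate   string `json:"date_start"`
}

func getTrips(url string) ([]map[string]string, error) {
    rsp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer rsp.Body.Close()

    // Assumed wrapper object with a "trips" array
    var payload struct {
        Trips []nomadListTrip `json:"trips"`
    }
    if err := json.NewDecoder(rsp.Body).Decode(&payload); err != nil {
        return nil, err
    }

    var trips []map[string]string
    for _, t := range payload.Trips {
        trips = append(trips, map[string]string{
            "place":      t.Place,
            "name":       t.CountryName,
            "code":       t.CountryCode,
            "start_date": t.StartDate,
        })
    }
    return trips, nil
}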
Getting the data from this URL was no trouble; the main issue I faced was how I wanted to structure the data. With all the other media types, I had defaulted to only displaying up to three of the latest items for each category. For travel stats, I wanted to include more than just the latest three trips I’d taken.
Initially, I thought about only displaying travel stats for the current year but realised this could remain empty towards the start of the year. So instead I thought about adding three years’ worth of travel stats, but that also seemed a bit too much and would defeat the purpose of a ‘now’ page.
In the end, I compromised and added travel stats for the current year and the previous year. The code to handle this isn’t the prettiest and I have to admit that parts of it are hard-coded (boo) rather than dynamic, but it got the job done. For the 2023 and 2024 travel stats, the function TripsInYear takes the entire trips slice and a year (as a string) and returns a sub-slice containing only the trips from that year.
tripsIn2024 := nomadlist.TripsInYear(countries, "2024")
tripsIn2023 := nomadlist.TripsInYear(countries, "2023")

func TripsInYear(tripsInput []map[string]string, year string) []map[string]string {
    var tripsOutput []map[string]string
    for _, trip := range tripsInput {
        if trip["start_date"][0:4] == year {
            tripsOutput = append(tripsOutput, trip)
        }
    }
    return tripsOutput
}
Travel edge cases
I did come across a few minor edge cases with this travel data, so I created some mini functions to handle these.
My trip tracker on NomadList included when I was back in London, but I didn’t want to pull through ’trips’ to London on my now page. The first mini function I made removes all trips from the slice that have a value of London for the ‘place’ key.
tripsIn2024 = removeLondonTrips(tripsIn2024)
tripsIn2023 = removeLondonTrips(tripsIn2023)

func removeLondonTrips(countries []map[string]string) []map[string]string {
    var filteredCountries []map[string]string
    for _, trip := range countries {
        if trip["place"] == "London" {
            continue
        }
        filteredCountries = append(filteredCountries, trip)
    }
    return filteredCountries
}
Next, I wanted to remove duplicate countries from my trips slice. In this function, I make use of the containsValue utility function I wrote earlier to only append countries to the slice if a trip["name"] value was not already present.
countriesIn2024 := removeDupes(tripsIn2024)
countriesIn2023 := removeDupes(tripsIn2023)

func removeDupes(trips []map[string]string) []map[string]string {
    var countries []map[string]string
    // sorts trips from oldest to newest
    slices.Reverse(trips)
    for _, trip := range trips {
        // check if a trip["name"] is present in the slice countries
        if !containsValue(countries, "name", trip["name"]) {
            countries = append(countries, trip)
        }
    }
    return countries
}
Removing the London edge case and duplicate countries was simple; however, removing other trips in ‘England’ wasn’t as easy, as the default ‘country’ for England is actually just listed as ‘UK’ on the NomadList website.
This led to a small issue with getting my visit to Scotland to pull through correctly in my travel stats, so I ended up keeping UK trips in my 2023 travel stats and relabelling them as Scotland (the only non-London place I visited in 2023) through the use of a third mini function (I admit that this was a bit of a hacky fix).
tripsIn2023 = addScotlandTrip2023(tripsIn2023)

func addScotlandTrip2023(countries []map[string]string) []map[string]string {
    var filteredCountries []map[string]string
    for _, trip := range countries {
        if trip["name"] == "United Kingdom" {
            trip["name"] = "Scotland"
        }
        filteredCountries = append(filteredCountries, trip)
    }
    return filteredCountries
}
Country flag emojis
Showing just the name of a country I visited on my now page seemed a bit boring, so I wanted to add each country’s flag emoji to each entry on my page. I found that in addition to being able to fetch countries from trip data on NomadList, there was also a field containing the country’s two- or three-letter country code.
After a search online, I managed to find a Go package called Go Emoji Flag that could convert a country code into a flag emoji.
Using the country code from each trip, I performed a quick look-up and obtained the correct flag for each country that I visited and added that to the script as well.
const NoCountries = "Haven't visited any countries recently"

func formatCountries(countries []map[string]string) string {
    var formattedText string
    var countryEmoji string
    if len(countries) == 0 {
        formattedText = NoCountries + "\n\n"
        return formattedText
    }
    for i := range countries {
        // UK country code needs to be GB to fetch correct emoji flag
        if countries[i]["code"] == "UK" {
            countries[i]["code"] = "GB"
        }
        // Handles Scotland edge case
        if countries[i]["name"] == "Scotland" {
            countryEmoji = "\U0001F3F4\U000E0067\U000E0062\U000E0073\U000E0063\U000E0074\U000E007F"
        } else {
            countryEmoji = emoji.GetFlag(countries[i]["code"])
        }
        countryText := fmt.Sprintf("%s %s\n\n", countryEmoji, countries[i]["name"])
        formattedText += countryText
    }
    return formattedText
}
With that wrapped up, I was then left with my more ‘static’ or ‘infrequently updated’ parts of my now page…
Updating static content
Static content is what I would consider the parts of my now page that don’t update very frequently and would still need to be updated manually. This included the ‘What I’m up to, Learning and Fitness’ sections on my now page.
Having already built the automation script, I thought about how I could add this static content into it. I decided that it was best to store the static content in another markdown file, titled ‘static.md’, and then read the file as part of the automation script.
staticContent, err := os.ReadFile("static.md")
if err != nil {
    log.Fatalf("unable to read from static.md file. Error: %v", err)
}
Sure, it meant that there was still an element of needing to manually update this file, but given how infrequently it needs to be updated, I was happy with this solution.
Setting up GitHub Actions
After adding a bit more to format the now page how I wanted it to be, the Go script was finished 🎉
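That formatting step is mostly string concatenation: each section gets a heading, the formatted items get appended and the result is written out over the old now page. The sketch below captures the idea; the section headings, variable names and output path are illustrative rather than my exact layout:

var sb strings.Builder
sb.Write(staticContent)
sb.WriteString("\n## 🎬 Recently watched movies\n\n")
for _, movie := range latestMovies {
    sb.WriteString(fmt.Sprintf("[%s](%s)\n\n", movie["title"], movie["url"]))
}
// ...repeat for books, games, TV shows and travel stats...
if err := os.WriteFile("content/now.md", []byte(sb.String()), 0o644); err != nil {
    log.Fatalf("unable to write now.md file. Error: %v", err)
}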
After compiling the Go script into an executable, I added it to my website’s repository in a new scripts directory.
Then, using what I had learnt from Victoria’s tutorial, I put together a GitHub Actions workflow to run the script daily and commit the updated now.md file to my main branch. This then triggered a new build of the site via Cloudflare Pages and, shortly after, the now page on my site would be updated 🥳
name: update-now

on:
  schedule:
    - cron: '0 1 * * *'
  push:
    branches:
      - master

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: get branch
        uses: actions/checkout@main
        with:
          fetch-depth: 1
      - name: run script
        run: |
          cd ${GITHUB_WORKSPACE}/scripts
          ./automate-now
      - name: deploy
        run: |
          git config user.name "${GITHUB_ACTOR}"
          git config user.email "${GITHUB_ACTOR}@users.noreply.github.com"
          git add .
          git commit -m "🍱 Dynamic now page update"
          git push --all -f https://${{ secrets.GITHUB_TOKEN }}@github.com/${GITHUB_REPOSITORY}.git
Manual stat tracking
Now, you might have noticed that a lot of the tracking of various stats is still actually done manually, but instead of tracking things by editing my now page, it’s done via third-party services like Letterboxd, NomadList and Oku. This is something that I’m totally happy with.
The third-party services give me an easy way of tracking and logging things while also giving me the ability to pick and choose what I want to share rather than sharing everything by default. For now, I think this is a good compromise.
Summary
Since setting this all up, my now page has been updated automatically daily without any major issues, so I’m really happy with how this project turned out. I managed to learn quite a bit in the process (including writing my first generic function!), improved my Go scripting skills and created something that solved a real problem I had.
You can view my now page here.
There are some things I’d like to add to or improve in the script, such as more unit tests (especially around API calls) and poster images for media on the now page, but I’ve decided to pause this work for now and focus on other projects in the meantime.
Similar to what Sophie said in her post, I do wish that more apps/websites offered an RSS feed or an API. There are a ton of amazing things that could be built if data across these services were a bit more available to individuals.
Everything should have an API.
Interested in seeing how the code works? You can find the source code here on GitHub.