The Best Time to Post on Hacker News

Jun 16, 2020

I’m starting a new blog, and I have the ambition of attracting high-quality traffic quickly. Marketers recommend scheduling your posts to reach the largest possible audience, so what, really, are the best hours for Hacker News, a community of night owls?

There are two notable existing analyses, one from 2014 and one from 2017, but neither is up to date, neither is updated continuously, and I would expect some patterns to have changed.

Fortunately, there is no need to scrape anything: Hacker News items (stories, comments, jobs, and polls) are available as a BigQuery public dataset, so we can find a crude answer in an hour, without resorting to R or Python, using only BigQuery and Data Studio.

For the impatient:

(the report as a PDF)

It shows the total number of daily top 10 posts since 1/1/2017, broken down by hour (US/Central) and day of week. For example, 123 top posts were submitted on Wednesdays at 1 PM, but only 39 on Saturdays at 4 AM. It seems that the best time to post on Hacker News is between 7 AM and 1 PM, and the ideal time is Tuesday at 11 AM (why?).

The query:

It creates a new BigQuery table named maurycy_hn.top_10_since_2017 (please create your own dataset and use it instead of maurycy_hn if you want to play along) that contains the top 10 posts for each day since 1/1/2017. I then used Data Studio to connect to the table and visualize it as a pivot table with a heatmap. Voilà!
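A minimal sketch of such a query, assuming the bigquery-public-data.hacker_news.full public table (the window-function approach and the column choices here are illustrative, not necessarily what the original query does):

```sql
-- Sketch: rank each day's submissions by score and keep the daily top 10.
-- Replace `maurycy_hn` with your own dataset before running.
CREATE OR REPLACE TABLE maurycy_hn.top_10_since_2017 AS
SELECT
  id,
  title,
  score,
  `timestamp`,
  FORMAT_TIMESTAMP("%A", `timestamp`, "US/Central") AS day_of_week,
  EXTRACT(HOUR FROM `timestamp` AT TIME ZONE "US/Central") AS hour
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY DATE(`timestamp`, "US/Central")  -- one partition per calendar day
      ORDER BY score DESC
    ) AS daily_rank
  FROM `bigquery-public-data.hacker_news.full`
  WHERE type = 'story'
    AND `timestamp` >= TIMESTAMP("2017-01-01", "US/Central")
)
WHERE daily_rank <= 10;
```

The heatmap is then just a count of rows per (day_of_week, hour) cell, which the Data Studio pivot table produces on its own; the same numbers could be computed in SQL with a GROUP BY over day_of_week and hour.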

The query is not optimal (some mathematics would help!), but the main issue is that a post can reach the front page, as seen on /front, the Hacker News wayback, without making it into the daily top 10. The dataset does not provide front-page information directly, but that does not matter much: the front page is not an end in itself, and less busy times deliver fewer visitors (e.g., when all the night owls are asleep).

Updating the report in real time would mean dropping the temporary table (i.e., maurycy_hn.top_10_since_2017) and using the SELECT directly as a Data Studio source, but pricing makes that risky. The query processes roughly 500 MB of data per run, and only the first 1 TB of query data processed per month is free, so it adds up quickly.
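To put that in perspective: at roughly 500 MB per run, the free 1 TB per month covers on the order of 2,000 runs, and an auto-refreshing report can burn through that faster than you might expect.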

If you’re not exactly a Google Cloud person, please feel free to download the result of the query (as a CSV) and use other tools to play with it.

PS. Honestly, I believe that great content (copywriting?), like this post, always wins, so please take the analysis with a grain of salt.
