Cybergate9.Net Rebuild 2022

⊰ 2022-06-22 by shaun ⊱

Well, it seems it's that time again. I had a few things I wanted to add to the site, wanted to do a bit of restructuring, and then..

What was wrong with the old site?

Well.. nothing, per se..

But.. the previous build was done using hugo, and that was ok as far as it goes..

But hugo becomes decidedly not lightweight once you get past a basic structure..

In fact, if you want to do anything you'd normally do with php/javascript/css (say, a custom lightbox with exif data), in some ways it's more complicated in hugo because you have to learn the hugo way. Which frankly is just a bit tiresome: learning something new and 'jumping through hoops' just to produce a static web site in 'hugo speak' (let's face it, we already have html/css for a reason). And it was also using bootstrap, which is pretty much the 'entire kitchen' when I only really need a knife and fork..

Where have we been?

Well.. Cybergate9.Net has been around since circa 2001/2, and all the original stuff was built using the traditional build tools of the time, i.e. Frontpage, Dreamweaver, or just plain HTML/CSS.

Two existing sub-sites were built on Dreamweaver templates:


Around 2005 I authored a framework (the old phpSiteFramework) which ran The Fitzwilliam Museum's Website(s) until, I think, 2014/5, when we moved it over to Drupal. Of course I also built a number of independent sites using the framework over those 10 years, including multiple versions of Cybergate9.Net.

There was also a period of time when most blog stuff was being delivered by my own install of Wordpress, but once they started trying to make it 'all things to all people' (instead of just a blogging platform), it became a serious security liability (it went through a phase of being totally pwned fairly regularly), so I just stopped using it.

Simultaneously, of course, flickr and facebook were 'things of the moment'. pinterest, twitter, etc. are also places I visited briefly on and off.

So, it turns out I have quite a bit of content, spread over a few platforms, or in databases, or archived off platforms I no longer use (like flickr), and in an ideal world I thought it might be nice to bring it all 'back together again'.

Where are we going?

So considering all of the above, and some time on my hands to be bothered to tackle this type of task, I've begun to think about the requirements for the thing which is to 'be built'. The list looks something like:

  • take the old phpSiteFramework, turn it into the new PHP Siteframework, and use it as the base 'delivery engine', because it's battle-tested, very efficient, very versatile, and yet still a pretty straightforward code base to maintain.

  • be a bit old fashioned, i.e. build to be like 'Web 1.0'. In practice this means being 'well behaved' on things like accessibility, page 'weights', and no javascript unless absolutely required, whilst still achieving modern best practice like responsive design, decent typography, etc.

  • use the latest standards: content from html, markdown, json, or php. Deliver using straight html5 + modern css where possible.

  • a suite of tools to migrate content (be it from wordpress, mysql, facebook JSON, or plain html) into a standardised markdown format. Part of the logic here is that if everything is standardised into a known format and directory structure then, theoretically, it can be used as the source data to create any output format (i.e. website html, json, pdf, word docs, or whatever) given the right tools.

  • a tool set for processing images, taking into account 'web optimised' derivatives and their metadata (exif/iptc etc.). The thinking here is to always keep a set of originals with their metadata (as these can be edited in a standard workflow with ON1 Photo Raw or Lightroom or whatever) separate from whatever the website needs. Experience has proved this is the least painful way of going about it, because things change: if, for example, you only have a set of 500px derivatives and all of a sudden you want a set of 1000px derivatives, where are all the originals? You should simply be able to generate a 'new derivative set' for whatever you want (thumbnails, high-res tile sets, or whatever).
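The 'longest side' derivative idea boils down to a bit of aspect-ratio arithmetic plus a metadata record per image. A minimal sketch (in Python for brevity — the actual tool set is PHP, and these function names are hypothetical, not the real tools):

```python
import json

def derive_size(width, height, longest_side):
    """Scale (width, height) so the longest side equals longest_side,
    preserving aspect ratio. Never upscales an original."""
    scale = longest_side / max(width, height)
    if scale >= 1:  # original is already smaller than the target
        return width, height
    return round(width * scale), round(height * scale)

def metadata_record(filename, width, height, exif):
    """Bundle per-image metadata into a dict, ready to dump as JSON
    alongside the derivative set for later use by the site."""
    return {"file": filename, "width": width, "height": height, "exif": exif}

# a 6000x4000 original derived at longest side 1000 -> 1000x667
print(derive_size(6000, 4000, 1000))
print(json.dumps(metadata_record("img001.jpg", 1000, 667,
                                 {"Model": "Canon EOS R"})))
```

The point of keeping the sizing function separate from any image library is exactly the 're-derivation' argument above: point it at the originals with a new `longest_side` and you get a fresh set.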

What's been done?

  • the framework has been modified to:

    • be under git version control with master hosted at Github

    • implement caching in PHP natively (removing the requirement for Cache-Lite)

    • implement ability to deliver content from markdown using Parsedown

    • implement standalone ability to process metadata or front matter from a markdown file using a very simplistic yaml parser (read 'em in, 'split' on colons, doesn't get any simpler :-P)

  • image tools are basic, but working. They can derive arbitrary sets of images based on 'longest sides', which gets you a reasonable compromise between resolution and file size (and hence payload on the wire). They simultaneously extract metadata (EXIF and other) into a json data file to be used later.

  • determined the new structure to date: Home, Witterings (writing, with sub-categories), and Ephemera (Resources)
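The 'split on colons' front-matter idea really is that simple. A toy sketch of it (Python here for illustration — the framework itself is PHP, and this is not the actual parser):

```python
def parse_front_matter(text):
    """Very simplistic front-matter parser: if the file starts with a
    '---' fenced block, read each line and split on the first colon.
    No nesting, no lists -- doesn't get any simpler."""
    meta, body = {}, text
    if text.startswith("---"):
        lines = text.split("\n")
        for i, line in enumerate(lines[1:], start=1):
            if line.strip() == "---":  # closing fence: the rest is body
                body = "\n".join(lines[i + 1:])
                break
            if ":" in line:
                key, value = line.split(":", 1)
                meta[key.strip()] = value.strip()
    return meta, body

doc = """---
title: Rebuild 2022
date: 2022-06-22
---
Markdown body here.
"""
meta, body = parse_front_matter(doc)
print(meta["title"])  # Rebuild 2022
```

The body then goes to the markdown renderer (Parsedown, in the framework's case), and the metadata drives templates, lists, and so on.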

Ongoing commentary in posts on 'rebuild2022'

some code things in here...

while (life = shit) do
    - something different;
    - toughen up princess;
    - repeat;

computerised philosophy? 😛
