Open the gate.. welcome to this dimension..
Well.. a lot of 'grunt work' has gone into getting 'behind the scenes' things working for 'Rebuild 22'
There are two types of images: 1) those that never change and are generally fixed in size (e.g. logos etc.), and 2) photos, graphs, graphics et al., which are likely to be used in a variety of circumstances and at a variety of sizes (e.g. thumbnail, 'on page' size, full size in a 'lightbox').
As explained earlier, long experience tells me you need to plan for changes in image handling right from the outset - for both sanity and efficiency.
To that end all source images 'live locally' in either /images/noderivs/* or /images/originals/. 'No derivatives' (noderivs) holds type 1 images, and originals holds type 2. A local script takes every image in originals and:
resizes and resamples it (bicubic) into multiple widths of 500, 750, 1000, 1500, and 2000 pixels, each in its own subdirectory
where possible, extracts all available metadata from the original image, and stores the metadata for all images in a single JSON file (of course this may become prohibitively large at some stage, but probably not in my lifetime :-P)
example exif metadata in JSON
"Orientation": 1,
"Software": "Android CPH2025_11_C.78",
"Exif_IFD_Pointer": 74,
"DateTimeOriginal": "2022:06:17 10:01:36",
"UndefinedTag:0x9011": "+09:30",
"SubSecTimeOriginal": "609",
"ColorSpace": 1,
"ExifImageWidth": 1304,
Page metadata is handled twice, for two different reasons.
The primary purpose of page metadata (or 'front matter') is to populate on-page display items for each post 'automagically' (e.g. author, date, categories etc.).
If desired, page metadata can also be used as 'control signals' - these are read by the display framework to trigger various things (e.g. when 'lightbox: yes' is detected, the 'lightbox' (i.e. venobox) javascript can be loaded into the page header for that page). Whilst this isn't very 'pure' (a purist would argue that using front matter for control/state is a bad idea) it is pragmatic and generalisable, and therefore provides a mechanism to add ad hoc functionality in the future with little effort, because it's 'already there', tested, and working..
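A minimal sketch of how such a control signal might be read - the naive front matter parsing and the venobox script path here are illustrative assumptions, not the site's actual code:

```python
# Sketch: front matter as control signals (assumed '---'-delimited,
# simple 'key: value' lines; the venobox path is hypothetical).
def parse_front_matter(text):
    """Return a dict of key/value pairs from a simple front matter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def extra_head_tags(meta):
    """Translate control signals in front matter into extra <head> markup."""
    tags = []
    if meta.get("lightbox") == "yes":
        tags.append('<script src="/js/venobox.min.js"></script>')
    return tags

post = """---
title: Rebuild 22
lightbox: yes
---
content here
"""
print(extra_head_tags(parse_front_matter(post)))
# → ['<script src="/js/venobox.min.js"></script>']
```

The point of the pattern: adding a new behaviour later only means checking one more key in `extra_head_tags`, with no change to the content pipeline itself.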
This first use is 'dynamic': it happens as summary pages or direct page 'views' are rendered.
The second use is more preparatory. A local script scans all content files, down through the entire file hierarchy, recording each category and tag value and which content files use that category/tag. This is saved in a single JSON file, and can be used in the future for category and/or tag 'views' functionality, e.g. 'by category' summary pages, tag 'word clouds', etc.
example category metadata in JSON
"bureaucracy":
[
"2009/20090217.md"
],
"potpourri":
[
"2009/20090624.md",
"2009/20090628.md"
],
"personal":
[
"2010/20100518.md"
],
"economics":
[
"2022/20220614.md",
"2022/20220615.md",
"2022/20220622a.md",
"2022/20220622s.md",
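A stdlib-only sketch of such a scanner, producing output shaped like the excerpt above - the front matter format assumed here (a comma-separated 'categories:' line) is a guess, not the site's actual convention:

```python
# Sketch of the category/tag scanner (assumption: '---'-delimited front
# matter with a comma-separated 'categories:' line in each .md file).
import json
import os
from collections import defaultdict

def front_matter_values(path, key):
    """Return comma-separated values of `key` from a file's front matter."""
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    if not lines or lines[0].strip() != "---":
        return []
    for line in lines[1:]:
        if line.strip() == "---":
            break
        k, _, v = line.partition(":")
        if k.strip() == key:
            return [item.strip() for item in v.split(",") if item.strip()]
    return []

def scan(content_root):
    """Walk the content hierarchy, mapping each category to the files using it."""
    by_category = defaultdict(list)
    for dirpath, _, filenames in os.walk(content_root):
        for name in sorted(filenames):
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, content_root)
            for cat in front_matter_values(path, "categories"):
                by_category[cat].append(rel)
    return dict(by_category)

def save(content_root, out_file="categories.json"):
    # one JSON file mapping category -> list of content files, as above
    with open(out_file, "w", encoding="utf-8") as f:
        json.dump(scan(content_root), f, indent=2)
```

The same walk can record tags as well (a second `front_matter_values` call with a 'tags' key), feeding future 'word cloud' or 'by tag' views from the one index.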
enough for today..