Credit goes to blog.ouseful.info

Elsewhere…

Last year, I “retired”, taking the early option to leave via a mutual resignation scheme.

The break was pretty much a clean one, except for a visiting position to see out a project deploying JupyterLite and browser-executed R code to support a student in a secure environment (one of His Majesty’s pleasure facilities).

I’m not sure if any other modules are aware that such a thing is possible. Depending on which direction the planned update to the data analysis and management module went, there is no reason at all why it couldn’t have offered all the student practical activities as runnable via, and within, the browser, and as such been offerable from a technical point of view. And from what I’ve seen of many of the science and other computing modules that include coding activities, they too could in principle execute code solely within the browser (which includes on smartphones). But I hope that we at least managed to set a precedent showing that it is technically possible to support student coding activities inside such environments. (The OU is a bit like the old NASA model, where it’s difficult to fly things in space that haven’t already been flown. The very few successes I had in the OU were setting precedents…)

Anyway — this blog is largely now defunct. I was minded to post very briefly just now on a blog I haven’t looked at for getting on for a decade, the Digital Worlds — Distorted Reality blog — The Web as Alternate Reality — and may start to use that as a dumping ground to further document the nightmare descent of that parallel nonsense world, but I spend most of my time now in the world of story.


If you must…

I admit to using genAI for coding, but I’m still reluctant to engage with it in pretty much any other context, except a bit of OCR (particularly with not-quite-Roman alphabets such as those of Old and Middle English texts), and even then with a lot of care. (genOCR doesn’t like minority readings, preferring the statistically most likely next token; for a digital humanities scholar that could potentially lead to an interesting way of probing biases, but as I don’t want to contribute to the genAI world, even by being critical of it (which admits of it), ’nuff said.)

That said, if you must, be critical. Paul Bradshaw is as good a read as anyone I follow in my feeds on being measured in how you make use of such tools. For example, 4 ways you can ‘role play’ with AI and “Journey prompts” and “destination prompts”: how to avoid becoming deskilled when using AI.

But even so, pretty much everything about it still all leaves me cold (not even considering the consequences for financial markets in the short term, then the social consequences of the bail-outs that follow in the crash aftermath), and I see no real fun in exploring this tech at all, just the potential for misdirection, manipulation, misapprehension and lots of other negatives beginning with m.

I’m so glad I got out of working for an org that, as with most others, feels obliged to engage with it. The tech I embraced over the years, I knew where it fell short / didn’t cope well; I had a sense of where it was likely to go over the next couple of years (and into the five-year-plus horizon that OU courses were developed for and had to deliver into); and I understood why that meant other folk felt they couldn’t use it (because it didn’t “always” work, or didn’t necessarily work “100% properly”, even if you knew where / how it wouldn’t work and what its failure modes were). Now, pretty much everyone is embracing something that doesn’t work “100% properly”, etc., and for which it can be really hard to get a proper sense or understanding of what and how it’s broken. Which it is. Innately. And they’re rushing into a fog bank of uncertainty about how it will develop and where the risks / harms are, not forgetting that the techbro cos. are using the tobacco bros.’ playbook when it comes to, yeah, well, harm, whatever. Tobacco / sugar etc. crossed paths with biological evolution. Social media has been messing with psychology (which leads to personal and social harm at the individual and family level). GenAI is messing with culture, which knackers things at a generational level. Turn it off.

Onto Amazon with some Island Tales storynotes

Noticing that Martin has announced the publication of his first novel — Publication Day! — along with a professionally designed cover, I thought I might as well mention that I’ve also announced on my storytelling persona blog that I’ve started putting together workflows for getting my storynotes onto Amazon for distribution as print-on-demand paperbacks or as Kindle ebooks.

This includes scripted book cover generation using GOFC — good old fashioned code — that runs the same way each time, and a quarto publishing route that churns quarto markdown into LaTeX and then PDF or epub format.

Two or three years ago I used Lulu to run off a few copies of an earlier version of these booklets using a Jupyter Book initiated production route, but the Amazon pricing when buying author copies is better than the Lulu route, so that was another reason to switch to Amazon.

Ideally, I’d do a larger print run with a printing house, but that requires an up-front order in the hundreds to get the price breaks, which not only means an up-front investment in the hundreds of squids for each booklet, but also the problems of storage and finding sales outlets, as well as shipping and logistics costs and overheads.

Along the way, I did use a bit of AI assist to customise some LaTeX includes to modify the quarto default formatting, to help with some of the regexes in creating a script to convert my legacy Jupyter Book MyST style markdown to qmd, and to fettle image production, such as resizing, autocropping and transparency handling using ImageMagick CLI commands.
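For the record, the sort of ImageMagick invocations involved look something like this — a sketch with made-up filenames, not my actual commands; it generates its own stand-in image and skips cleanly if ImageMagick 7 isn’t installed:

```shell
# Illustrative ImageMagick 7 image-fettling commands; filenames are invented.
if command -v magick >/dev/null 2>&1; then
  # stand-in source image: white canvas with a black rectangle on it
  magick -size 200x100 xc:white -fill black -draw "rectangle 50,25 150,75" cover_raw.png
  magick cover_raw.png -resize 1200x cover_resized.png          # resize to max width, keep aspect ratio
  magick cover_raw.png -trim +repage cover_cropped.png          # autocrop surrounding whitespace
  magick cover_raw.png -fuzz 5% -transparent white cover_t.png  # make the white background transparent
  IM_DONE=yes
else
  IM_DONE=skipped
fi
```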

I’ve also been exploring generating map related images…

Anyway, the books are in a series — Island Tales — and are available as both ebook and paperback from Amazon:

  • A Legend of Godshill reproduces Elder’s version of the tale as well as my take on it, Percy Stone’s lay, excerpts from 19th century tourist guides, and a biographical note on Abraham Elder; available here.
  • A Legend of Puckaster Cove includes Elder’s original version of the tale, something approximating my first retelling of it a couple or more years ago, and some historical notes that once again pull on 19th century travel guides to the Island; available here.

Running containers on Mac without Docker

Somehow I missed this, I guess in my run up to “retirement”… container is a tool that you can use to create and run Linux containers as lightweight virtual machines on your Mac. It’s written in Swift, and optimized for Apple silicon. A native — and official — Apple CLI tool for running containers… https://github.com/apple/container
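By way of a hedged sketch (command names as per the project README; flags may vary by release, and the tool only runs on macOS / Apple silicon, so this guards on it being present):

```shell
# Trying out Apple's "container" CLI; no-ops on systems where it isn't installed.
if command -v container >/dev/null 2>&1; then
  container system start                        # start the background service
  container run --rm alpine:latest echo hello   # run a one-shot Linux container
  container ls                                  # list running containers
  CONTAINER_CLI=present
else
  CONTAINER_CLI=absent
fi
```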

AI shortcuts to nowhere…

Things like https://www.oreilly.com/radar/the-cognitive-shortcut-paradox/ (AI used by novices may give answers but means they miss out on learning etc.) put me in mind of things like telly talent shows, where folk are transported from nowhere to “stardom” in an afternoon, rather than spending years putting in the hours, gigging to no-one, and learning craft.

Moving to quarto and LaTeX…

For several years, I’ve been creating “storynotes”, collections of story variants for particular folk tales, as well as collections of news stories contemporary to particular historical events, that I can then use as the basis for developing my own accounts, and then stories, around particular events.

Now that I’m “retired”, I’m also looking for merch opportunities to try to claw back some of the costs associated with telling tales (a 2-3 hour folk night or open mic, at a pint of shandy every 40-50 minutes, = 15 squids of cost…). So I’ve started looking at producing booklets I can print-on-demand and then sell for a fiver.

I started off with a couple of runs using Lulu (which I’d used to test publish some Island Tales storynotes a couple of years ago), using the Lulu book cover designer to generate simple covers, but now I’m looking at Amazon, which is as cheap, if not cheaper, for one-off print-on-demand, plus it has the global reach in terms of marketing not just print-on-demand books but also ebook variants.

Up till now I’ve been using Jupyter Book (sphinx) for publishing, but Jupyter Book never really met its early promise, is a faff when trying to generate non-HTML outputs (PDF, docx, e-book), and, as I understand it, is in the process of moving to a myst/node rather than sphinx/python build process. So instead, I’m moving to quarto, which is well-supported and richly featured, although I haven’t yet worked out a clean way of generating an index. (That said, the epub doesn’t seem to work for me with Amazon KDP; instead, I have to load a docx into the KDP app, then generate the KPF file. That may simply be down to how I configure the epub in the quarto build, or there may be some other issue.)
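The render route itself is just the stock quarto CLI, something like this (the project directory name is made up, and it skips if quarto isn’t on the path):

```shell
# qmd -> LaTeX -> PDF, plus the epub variant that KDP didn't like for me.
if command -v quarto >/dev/null 2>&1; then
  quarto render mybook --to pdf
  quarto render mybook --to epub
  QUARTO=yes
else
  QUARTO=skipped
fi
```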

As my main output will be PDF, I’ve started hacking together LaTeX templates, admittedly with quite a lot of support from Claude.ai and ChatGPT, partly because they can generate the code quicker than I can, partly because I don’t (yet) know how to write LaTeX all that well.

For the inner book content, I’ve started putting together core templates for the different booklets I want to start producing, handling things like page layout, headers, header underlines, page numbering etc. (I’ve still not settled on if / how I want to use footnotes; I currently inline footnotes…)
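For a flavour of what those includes look like, here’s a minimal header / page-numbering sketch of the sort involved (the package and field choices here are illustrative, not necessarily the ones I settled on):

```latex
% illustrative include for headers, header underline and page numbers
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}                           % clear the default header/footer fields
\fancyhead[LE,RO]{\thepage}          % page number on the outer edge
\fancyhead[RE]{\itshape\leftmark}    % chapter title on even pages
\fancyhead[LO]{\itshape\rightmark}   % section title on odd pages
\renewcommand{\headrulewidth}{0.4pt} % the header underline
```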

I also have templates for blurb pages (copyright / about) and title pages etc.

For the covers, I’m using LaTeX templates that allow me to fix a design and then customise it with text, fine-tuned positioning, and colour themes. (The idea is that books will be in series, and I want each series to have a similar look and feel.)

As part of the cover, I can also (optionally) overlay debug lines.

To improve reproducibility further, I’m using a customised .devcontainer in VS Code for the build process so that I can build a book or cover independently of whatever machine I’m on. (This extends to making sure the fonts I want to use are available. My original tests were done on a Mac and I’d unwittingly used fonts that ship with Mac but are commercial fonts. Adding fonts into the build process and then using those means I keep that part of the build reproducible too.)
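A minimal devcontainer.json along these lines captures the idea (the image name and font path are assumptions for illustration, not my actual config):

```json
{
  "name": "booklet-build",
  "image": "ghcr.io/quarto-dev/quarto:latest",
  "postCreateCommand": "mkdir -p ~/.fonts && cp assets/fonts/*.ttf ~/.fonts/ && fc-cache -f"
}
```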

For the Amazon publishing, the cover needs to mount an ISBN. Again, this is configurable.

The designs are rather utilitarian (I’ve only just started out on this process too…), but then, so are the designs in the cover design tools on Lulu and Amazon.

The most important point, though, is that while I used genAI to help write the LaTeX, everything in the publishing process is scripted and tuned using static configuration files. The process is idempotent. I am in control. There is no genAI in the actual publication route that is likely to give a different result every time you run it.

A new website, built with Hugo

Being now rather tired of WordPress and the requirements for hosting, I thought I’d build my personal storytelling website using Hugo. I’ve not looked at Hugo before, but the Hugoplate template made things pretty easy to get going, and with some appropriately crafted prompts to the free Claude.ai plan, which seems happy to pull in content from Github repos and elsewhere as additional context, I got enough cribs to make a couple of customisations that I needed for my site.

Anyway, it’s here at the relaunched montystoryteller.org site…

Next up, I’m pondering porting the tistales.org.uk site away from WordPress so I can drop the Reclaim Hosting plan and save pennies now I’m “retired”, particularly given they’re stopping email hosting (which has been really unreliable for ages what with being spam blocked etc. Small email providers stand no chance…); wp2hugo looks like it might be interesting for this, but that’s for another day — I’m really trying to limit my screentime, and have already spent too long at the keyboard today…

The Coming of the Confabulai…

According to Wikipedia, a consensual summary of human understanding, regarding a particular human memory error condition, [c]onfabulation occurs when individuals mistakenly recall false information, without intending to deceive.

Confabulation was originally defined as “the emergence of memories of events and experiences which never took place” … [and is] symptomatic of brain damage or dementias.

Confabulation is distinguished from lying as there is no intent to deceive and the person is unaware the information is false. Although individuals can present blatantly false information, confabulation can also seem to be coherent, internally consistent, and relatively normal.

It also appears to be associated with Wernicke–Korsakoff syndrome:

People with WKS often show confabulation, spontaneous confabulation being seen more frequently than provoked confabulation. Spontaneous confabulations refer to incorrect memories that the patient holds to be true, and may act on, arising spontaneously without any provocation. Provoked confabulations can occur when a patient is cued to give a response; this may occur in test settings. The spontaneous confabulations viewed in WKS are thought to be produced by an impairment in source memory, where they are unable to remember the spatial and contextual information for an event, and thus may use irrelevant or old memory traces to fill in for the information that they cannot access. It has also been suggested that this behaviour may be due to executive dysfunction, where patients are unable to inhibit incorrect memories or to shift their attention away from an incorrect response.

That chat AI you use? That drunk in the bar who can talk about anything…

From Search to genAI…

Skimming through my old blog archive, I note a post from January 15th, 2006:

… if we set assessment questions/tasks that can be answered using a search engine, and a student locates/discovers a relevant resource that satisfies the assessment, so what? (This begs the question of course – what exactly is the utility of such questions, in any case?!)

Admittedly, it’s nice to know what the source of the information was, but a marking scheme can accommodate that easily enough (e.g. with every question carrying marks for the provenance of the answer).

Search engines are changing the landscape – if you know how to use one effectively to solve a problem, find a relevant knowledge source, and so on – then you are arguably better equipped than someone who can remember that the squaw on the hippopotamus is equal to the sum of the squaws on the other two hides.

To which Emma Duke-Williams commented:

… That’s where we have to amend the assessment so that the student can use the information they’ve found. Yes, I fully agree lots of information is out there, it’s whether or not the finder understands & uses it correctly that’s the challenge to setting a good assessment.

For twenty years and more we could have been developing pedagogies that respected and reflected the ready availability of information, some of which was high quality, some not so. And now there is panic in the face of the ready availability of “answers”…