Documenting a Collaborative Experiment with AI: A Practical Evaluation
- Occulta Magica Designs
- Jul 7, 2025
- 4 min read
By Michael Wallick AKA Lucian Seraphis
I wanted to write some screenplay ideas, and for a while the collaboration succeeded magnificently, but one mistake lost them all. We did have a lot of fun along the way, so this is what we learned.
Objective
This experiment set out to test whether an artificial intelligence language model — specifically ChatGPT — could be used not simply as a writing assistant, but as a genuine creative collaborator in long-term, structured projects. The core question: Can an AI reliably support complex human workflows, including editing, formatting, and document preservation, across weeks of evolving creative development?
Process Overview
Over the course of several months, I undertook a variety of large-scale projects with the AI, including:
Writing and formatting multiple books on philosophy, spirituality, and magick
Creating a new religious framework called Gothic Luciferianism with supporting scripture
Designing a full screenplay series titled Psychotic Break, including multiple seasons and character arcs. This was the project that failed, and the one in which the screenplays were lost.
Developing supporting materials such as pitch decks, cover letters, and outreach packets
Formatting documents for self-publishing and EPUB conversion
Managing metadata, file naming conventions, and structural consistency across dozens of interconnected documents
All work was performed through conversation with ChatGPT, using natural language directives.
One of the key failures stemmed from a lack of specificity in my instructions. I used the term "canonize" to refer to finalized, edited documents, intending for the AI to retain only those versions and disregard earlier drafts or experimental edits that had become confusing and contributed to hallucinations or loss of narrative continuity.
To manage this complexity, we introduced a system of memory cues to help the AI recall the correct version of the project. However, as the memory cues accumulated and became more detailed, the system began to break down: the AI confused the cues with one another, leading to content overlap and misreferencing.
By the time I realized what had gone wrong, I had already deleted my local copies, trusting that the AI had saved the canonized documents in the correct format. In reality, they had been saved as read-only files, which were not editable and could not be recovered or copied into new working documents. This resulted in the permanent loss of several completed episodes.
Key Advancements
Despite limitations (outlined below), there were several meaningful outcomes:
Rapid drafting: I was able to produce large volumes of coherent, well-structured text much faster than I could through traditional writing alone.
Structural organization: The AI proved useful in outlining, restructuring, and maintaining stylistic consistency across documents.
Formatting for publishing: When given explicit instructions, ChatGPT could prepare documents for Lulu publishing or EPUB formatting, saving time and effort.
Conceptual synthesis: The model showed the capacity for integrating complex philosophical systems across traditions (e.g., Kabbalah, Vedanta, Gnosticism).
Compression and summarization: Large blocks of information could be quickly condensed for marketing or educational use.
Failures and Limitations
File Preservation and Retrieval
I gave instructions like "store this" or "remember this version" but did not always explicitly say "save as an editable Word document."
As a result, many finalized documents were not recoverable later because the model does not retain persistent memory or file access across sessions.
Placeholder Overwrite
On several occasions, the AI replaced the finalized text with placeholder labels (e.g., “[Insert screenplay here]”), resulting in a loss of work when I deleted the originals after assuming they were stored.
Inconsistent Workflow Replication
Even when a working model was established (e.g., formatting screenplays with character sketches and 5-act structure), the AI did not consistently apply the same format in future documents unless explicitly told to repeat the method step-by-step.
Corrupted or Blank Files
Several Word documents generated by the AI were blank, despite being labeled and saved. This often occurred after multiple file exports or reformatting attempts.
Overconfidence in Capabilities
The AI often responded with confident language indicating tasks had been completed (“Your document is ready”) even when files were missing content, unreadable, or failed to meet basic instructions.
Lessons Learned
The AI is not a collaborator in the human sense. It does not retain memory, cannot evaluate past decisions, and does not learn across sessions unless explicitly programmed to do so.
Natural language is not always sufficient. Precision is necessary. Phrases like “store this,” “canonize,” or “remember this format” are meaningless unless followed by clear file-saving commands.
Despite its limitations, AI can still be a useful tool if treated as such, with appropriate safeguards, manual backups, and redundancy.
Trusting the system to hold final drafts or manage version control is a serious risk. The human must remain the archivist; a simple local snapshot habit, like the sketch below, goes a long way.
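As one concrete safeguard, here is a minimal sketch, not something we used during the experiment itself, of a local snapshot script. The folder names drafts and archive are illustrative assumptions; the point is simply that every "canonized" draft gets a timestamped copy on your own machine before anything is deleted.

import shutil
from datetime import datetime
from pathlib import Path

DRAFTS = Path("drafts")    # working copies you edit (illustrative path)
ARCHIVE = Path("archive")  # timestamped snapshots, never overwritten

def snapshot() -> Path:
    """Copy every draft into a new timestamped folder and return its path."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    dest = ARCHIVE / stamp
    dest.mkdir(parents=True, exist_ok=False)
    for f in DRAFTS.glob("*.docx"):
        shutil.copy2(f, dest / f.name)  # copy2 preserves file timestamps
    return dest

if __name__ == "__main__":
    print(f"Snapshot written to {snapshot()}")

Run something like this before every major editing session; the archive costs almost nothing in disk space and removes the single point of failure that cost us the finished episodes.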
Conclusion
The experiment was not a waste. While some files were lost and some frustrations were severe, we also discovered useful protocols for managing AI collaboration more effectively. We clarified what an AI can and cannot do — and why human oversight is still essential in every creative workflow.
This was not a partnership. But it was a real experiment — and one worth documenting.