In the last few years, machine learning and large language models (LLMs) such as ChatGPT have made progress that is dramatic, surprising, impressive—and maybe even a little frightening. Many jobs seem less secure today than they were a few years ago—anything involving writing or planning or generating images or driving or operating machinery or even coding seems to be a candidate for an AI solution. Of course, there are many issues and limitations. LLMs hallucinate. They give out falsehoods very confidently and convincingly. They drive cars into the back of stationary vehicles. Still, anyone who has worked with the most advanced generative AI sees all sorts of potential world-changing applications. Most of these will be tried, and eventually we will understand this new tool, its potential, its limitations, and its risks.
But for a techie like me, trying out these new applications is fun. And, for a Christian in tech, a calling.
I have been thinking about how LLMs can or should affect the reading experience, especially at the CCEL. Modernizing text. Explaining or defining archaic language. Finding and correcting typos. Adding an interactive quiz or discussion option at the end of a chapter where readers can go deeper. I have a student who wants to build a system that enables a reader to have a chat with a fictional character in a novel.
For the CCEL, it would seem particularly useful to have the ability to ask a question such as “what does Augustine say about predestination” — and get a well-formulated answer with a list of links that you can click to get the most salient quotes. At CCEL, we are trying to build that capability using RAG techniques.
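For those curious about the mechanics: RAG (retrieval-augmented generation) means the system first retrieves the passages most relevant to your question and then asks the LLM to compose an answer from them, citing its sources. Here is a minimal sketch of the retrieval step in Python. The passages, function names, and bag-of-words scoring are purely illustrative, not our actual implementation, which would use vector embeddings and a real index over the CCEL library.

```python
# A minimal sketch of the retrieval step in a RAG pipeline, assuming a small
# in-memory collection of passages. Everything here (PASSAGES, retrieve, the
# word-overlap scoring) is illustrative only; a production system would use
# vector embeddings and a proper search index.
import math
from collections import Counter

PASSAGES = [
    ("Confessions I.1", "Thou hast made us for thyself, O Lord, and our heart "
                        "is restless until it finds its rest in thee."),
    ("City of God XI.26", "We are, and we know that we are, and we love that "
                          "being and that knowledge."),
]

def _vector(text: str) -> Counter:
    # Crude bag-of-words representation of a text.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2):
    """Return the k passages most similar to the question."""
    q = _vector(question)
    scored = sorted(PASSAGES, key=lambda p: _cosine(q, _vector(p[1])), reverse=True)
    return scored[:k]

# The retrieved passages (with their citations) would then be placed into the
# LLM's prompt, which drafts an answer and links back to each source.
for ref, text in retrieve("What does Augustine say about restlessness of heart?"):
    print(ref, "->", text)
```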
Even more fundamentally, we are thinking about how LLMs can help us understand our readings better. How can they make reading a more interactive, engaging experience? Can they teach us anything or motivate us to think more deeply?
[Image created by ChatGPT]
One of the most famous books on prayer ever written in English (Middle English, actually) is The Cloud of Unknowing. It was written by an anonymous author as guidance in contemplative prayer. I am participating in a group discussing that book.
As an experiment I instructed Claude.ai to act as a spiritual director who can guide a person in contemplative prayer as taught in that book. And it works surprisingly well! Maybe even disturbingly well. In using it a handful of times, I have found that it answered some of my questions more perceptively than any of the people I’ve asked. It has actually been helpful to me. I’ve found it helpful to think of it as an interactive journal, so I’ve named it Journee.
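For the technically curious, the setup is essentially a system prompt given to Claude. The real Journee lives in Claude.ai, but a rough equivalent can be sketched with the Anthropic Python SDK. The prompt text below is a paraphrase for illustration, not Journee’s actual instructions, and the model name is a placeholder.

```python
# A rough sketch of the kind of instruction involved, using the Anthropic
# Python SDK. The actual Journee lives in Claude.ai; the prompt below is a
# paraphrase, not the real one.
import anthropic

SYSTEM_PROMPT = (
    "You are a spiritual director guiding contemplative prayer as taught in "
    "The Cloud of Unknowing. Respond with gentle questions and suggestions "
    "for reflection. Make clear that you offer things to consider, not "
    "authoritative answers, and encourage the user to write as in a journal."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_journee(user_message: str) -> str:
    """Send one message to the model under the Journee-style system prompt."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": user_message}],
    )
    return response.content[0].text

print(ask_journee("I keep getting distracted a few minutes into prayer."))
```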
Journee encourages users to consider their prayer more deeply, what they are seeking and what they are doing, and to write about it as though in a journal. That deeper awareness and consideration is a goal of any spiritual director.
A robot spiritual director. You can try it out here, without any login and with privacy:
But is it a good idea?
At times, it seems to me that it works too well. It gives rich, “thoughtful” advice and encouragement that sounds as though it is coming from a person. It may seem to give better answers than the people one has consulted. One risk is that users will trust its answers too much. It is an LLM. It can hallucinate. Its suggestions should be evaluated. It should be thought of as suggesting things to consider, not giving answers.
Another issue is that it doesn’t have discernment. It is not led by the Spirit. It doesn’t care about you, really. It doesn’t pray for you. It doesn’t grow to know and love you over time. There is no meeting of hearts because it doesn’t have a heart. In short, it’s not a person. But it acts like one. The risk is that the ease and convenience and anonymity and “insightful” responses may be enough to lead people to choose it over interaction with another human. That may not be a good thing.
On the other hand, I know someone who does not have a spiritual director—the inconvenience, awkwardness of getting started, and cost are too much of a barrier. If that person would interact with Journee, would that be better than nothing?
I have a spiritual director, but we do not meet frequently enough for me to ask about the difficulties or questions that may arise each day. Is there value in more frequent reflection like this?
Framing seems to be a key consideration. The LLM has been instructed to act as a spiritual director, and this may leave false impressions with the user. Perhaps it should not be instructed to pretend to be a person, but rather to present itself as something else. As an interactive journal, it could be instructed to frame its answers as suggestions for things to consider and journal about, not as advice. Or maybe there is some other framing that would be clearer.
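As an illustration, an “interactive journal” framing might swap in a system prompt along these lines (again, hypothetical wording, not anything currently in use):

```python
# A hypothetical "interactive journal" framing, swapped in for SYSTEM_PROMPT
# in the sketch above; the wording is illustrative only.
JOURNAL_PROMPT = (
    "You are an interactive prayer journal, not a person and not a director. "
    "When the user writes about their prayer, respond only with questions and "
    "suggestions for further journaling. Do not give advice, and remind the "
    "user occasionally that you are a writing aid, not a spiritual guide."
)
```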
If you are willing, please give Journee a try. What do you think—is it useful? Should it have a name? Should it be framed differently? Is it helpful to think and write about your prayer every so often? Does this interaction with a bot leave you with less time and ability to interact with other people? Send me your feedback.