tried making a tool that would let you map out lineages of intellectual/spiritual influence. I got it fetching data successfully, but Townie was unable to build an end-to-end system that would render the mermaid diagram once fetching finished, and now it just doesn't work.
I still think the idea is cool but I'm abandoning it for now.
ok, let's code up a tool that lets you put in N (~5-10) of your main influences (authors, spiritual teachers, etc.) and then generates a flowchart of your influences' main influences, going back a bunch of steps
it'll use an LLM to answer the question of who someone's main influences are, and for each one it'll output a sentence or two describing the relationship (e.g. "X mentored Y in his youth" or "Jung was trained under Freud but then broke with him over a disagreement about...")
so the whole thing is kind of recursive. I guess it should also let you put in your own name (for completeness when showing your chart to others)
my impression is that Claude React artifacts can now make calls to the Anthropic API, so you should be able to make the whole thing happen, with mermaid for the flowchart
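For reference, a minimal sketch of the request the artifact would send. The endpoint and headers follow the public Anthropic Messages API; the model name and prompt wording are placeholders, not anything from the actual build:

```typescript
interface InfluenceQuery {
  person: string;
  maxInfluences: number;
}

// Build (but don't send) the fetch arguments for one influence lookup.
// Returning the pieces instead of calling fetch keeps this testable.
function buildAnthropicRequest(query: InfluenceQuery, apiKey: string) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    options: {
      method: "POST",
      headers: {
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-3-5-sonnet-20241022", // placeholder model name
        max_tokens: 1024,
        messages: [
          {
            role: "user",
            content: `Who are the top ${query.maxInfluences} main influences on ${query.person}?`,
          },
        ],
      }),
    },
  };
}
```

The actual call would then be `fetch(req.url, req.options)` inside the artifact.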
by default it should find each influence's top 3-5 influences and then stop, with a button you can push to go one layer deeper.
you're going to need to make sure the API prompt gets Claude to respond in some sort of structured format so we can programmatically extract {name, relationshipToInfluencee}.
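One way to sketch that extraction step, assuming the prompt asks for a JSON array and the model may wrap it in a markdown code fence (the function name is illustrative):

```typescript
interface Influence {
  name: string;
  relationshipToInfluencee: string;
}

// Pull a JSON array out of the model's reply, tolerating a ```json fence
// around it, and keep only entries with the expected shape.
function parseInfluences(raw: string): Influence[] {
  const match = raw.match(/\[[\s\S]*\]/); // grab the outermost [...] span
  if (!match) throw new Error("no JSON array found in model output");
  const data = JSON.parse(match[0]);
  if (!Array.isArray(data)) throw new Error("parsed value is not an array");
  return data.filter(
    (x): x is Influence =>
      typeof x?.name === "string" &&
      typeof x?.relationshipToInfluencee === "string"
  );
}
```

Throwing on malformed output (rather than silently returning `[]`) makes failures show up in the debug log described below.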
I think we should find a particular person with a diverse set of relationships and use them as a real example in the prompt. You can probably pick a good one with a mix of personal-relationship/mentorship and intellectual-readership influence, including influences from hundreds of years apart. Maybe Carl Jung.
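A hypothetical version of that few-shot block. The Jung relationships below are general-knowledge examples chosen to show the mix of relationship types (live mentorship, pure readership, an influence centuries removed), not text from any actual prompt:

```typescript
// Few-shot example embedded in every influence-lookup prompt, so the
// model answers in a parseable JSON format with varied relationship types.
const FEW_SHOT_EXAMPLE = `
Example -- influences of Carl Jung:
[
  {"name": "Sigmund Freud", "relationshipToInfluencee": "Freud personally mentored Jung early in his career; Jung later broke with him over theoretical disagreements."},
  {"name": "Friedrich Nietzsche", "relationshipToInfluencee": "Jung never met Nietzsche but read him closely, lecturing for years on Thus Spoke Zarathustra."},
  {"name": "Meister Eckhart", "relationshipToInfluencee": "Jung drew on the medieval mystic's writings, an influence reaching back hundreds of years."}
]
Respond for the requested person in exactly this JSON format.`;

// Assemble the full user prompt for one person.
function buildPrompt(person: string, n: number): string {
  return `Who are the top ${n} main influences on ${person}?\n${FEW_SHOT_EXAMPLE}`;
}
```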
Notes:
- we want to render the mermaid live, not just output the mermaid markup
- we want to re-render the mermaid every time new data comes in. so before it even makes any LLM calls, you should see yourself and all of your influences
- for debugging, make it so that all of the LLM calls (prompt, output, and parsed result or error) are stored in a log
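A rough sketch of how the re-render and logging notes could fit together: rebuild the full Mermaid markup from the current edge list after every LLM response (so the root person and their direct influences render before any calls complete), and append one entry per call to an in-memory log. All names here are illustrative:

```typescript
interface Edge {
  from: string; // influencer
  to: string;   // influencee
  note: string; // one-line relationship description
}

// Rebuild the whole diagram from scratch each time; hand the string to
// mermaid's render API whenever new edges arrive.
function toMermaid(root: string, edges: Edge[]): string {
  const id = (name: string) => name.replace(/[^A-Za-z0-9]/g, "_");
  const lines = ["flowchart BT", `  ${id(root)}["${root}"]`];
  for (const e of edges) {
    lines.push(
      `  ${id(e.from)}["${e.from}"] -->|"${e.note}"| ${id(e.to)}["${e.to}"]`
    );
  }
  return lines.join("\n");
}

// One entry per LLM call, appended to an in-memory debug log.
interface LlmLogEntry {
  prompt: string;
  rawOutput: string;
  parsed?: unknown; // parsed influences, when parsing succeeded
  error?: string;   // parse/API error message otherwise
}
const llmLog: LlmLogEntry[] = [];
```

Regenerating the full markup on every update is simpler than patching the diagram incrementally, and at ~tens of nodes the cost is negligible.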