Seeing “Do not hallucinate.” included tells us a lot about AI in 2024.
I wonder why they keep writing the word "please": "Please limit the reply", "Please keep your summary". I don't think they understand precisely how LLMs work.
Doesn’t this repository constitute a copyright violation?
Suggestion for this repo: prompts embedded in JSON files are quite hard to read, especially in mobile browsers.
You could add some code (perhaps in a GitHub Action) that extracts the prompts into accompanying Markdown files to fix this; a rough sketch follows below.
You’ve already done most of the work to create this file: https://github.com/Explosion-Scratch/apple-intelligence-prom...
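As a rough sketch of that extraction step (not the repo's actual tooling): it assumes each JSON file is a dict, or list of dicts, whose prompt text sits under a hypothetical "prompt" key, so the key name and directory layout would need adjusting to match the repo's real schema.

```python
#!/usr/bin/env python3
"""Sketch: copy prompt strings out of JSON files into Markdown files.

Assumption: each *.json file contains a dict (or a list of dicts) with the
prompt text under a "prompt" key. Adjust to the repo's actual structure.
"""
import json
from pathlib import Path

SRC = Path(".")          # directory containing the JSON prompt files
OUT = Path("markdown")   # where the Markdown copies are written
OUT.mkdir(exist_ok=True)

for json_path in sorted(SRC.glob("*.json")):
    data = json.loads(json_path.read_text(encoding="utf-8"))
    records = data if isinstance(data, list) else [data]
    prompts = [r["prompt"] for r in records
               if isinstance(r, dict) and "prompt" in r]
    if not prompts:
        continue
    md_path = OUT / (json_path.stem + ".md")
    body = "\n\n---\n\n".join(prompts)
    md_path.write_text(f"# {json_path.stem}\n\n{body}\n", encoding="utf-8")
    print(f"wrote {md_path}")
```

Run on every push from a GitHub Action and commit the generated Markdown, and the prompts become readable directly in the browser.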