Journal
April 10
- Added timed and marathon modes to visual art section
- Added thirty questions to visual art question bank
- Refined backend
- Added exit survey (so I can connect factors like age, educational background, and AI familiarity with players' in-game performance)
April 3
- Changed UI to give everything sharp edges
- Integrated Google Sheets backend
- New game mockup: partygame-ten.vercel.app — a party game where players respond to personal prompts while an AI secretly submits its own response among them. Players then read, discuss, and vote to identify which answer was written by the AI.
- Deployed this journal
- Made timer UI stand out more
- Added 'review questions' section
- Improved the UI of the 'correct' answer page to look more like an NYT article — fewer clicks
- Added favicon
March 27
- Generated new literature section questions — it feels quite difficult to tell which passage is human, beyond spotting telltale signs of AI writing: phrases that feel deep but don't actually mean much (e.g. “sound that carried the weight of a thousand forgotten nights,” “like the bars of some luminous cage,” “The desert stretched before them like a page on which nothing had yet been written”), and at times the classic antithesis (“it's not X, it's Y”) and the rule of three. Generally, though, it was very difficult to spot the human writing unless I had already read the book.
- Changed layout of 6-point scale to look nicer
- New direction for music research: instrumental music — since music with lyrics treads into the territory of identifying AI-generated text; found copyright-free performances by musical artists
- It's interesting to consider: would this test just come down to how 'well-educated' the user is — that is, how much prior exposure they have to the work of these 'masters'?
- Built out timed mode and marathon mode
- Marathon mode is practically impossible with a 15 s start time — it takes at least 30 s just to finish reading the texts. So I will tweak the game rules: start time 1 min, with the nth correct answer adding (60 − 2n) seconds.
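A minimal sketch of that tweaked timer rule, assuming n counts correct answers from 1 and ignoring time spent reading and answering (function names are my own, not the project's):

```typescript
// Hypothetical sketch of the marathon-mode timer rule described above.
function bonusSeconds(n: number): number {
  // The nth correct answer (n = 1, 2, 3, …) grants 60 − 2n extra seconds,
  // clamped at zero so very late answers can't subtract time.
  return Math.max(60 - 2 * n, 0);
}

function totalTimeGranted(correctAnswers: number): number {
  let time = 60; // starting time: 1 minute
  for (let n = 1; n <= correctAnswers; n++) {
    time += bonusSeconds(n);
  }
  return time; // total seconds granted over the run
}
```

One side effect of this rule: the bonus reaches zero at the 30th correct answer, so a run can't be extended indefinitely.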
March 20
- Added a 'visual art' section — it might be a bit too easy to discern AI-generated music and video
- Refined research direction to: when AI imitates the masters, can you tell the difference? Humans can still produce “slop”-feeling content, but for creators we've agreed to call “masters”: what exactly makes their work special, and can AI replicate it in a way that makes the audience think and feel as those masters did?
- Outlined three game modes to add more metrics for investigation, as well as to promote user engagement
- Experimented with a new set of questions for the Literature section — but the AI outputs read too much like the original writing: the paragraph structure felt similar, and most of the AI's work seemed to lie in simply rewording. So these questions were scrapped.
- Added 6-point scale — would be interesting to see the relationship between user certainty and the time it takes them to answer the question
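Once answer data is collected, the certainty–speed relationship could be checked with a simple correlation. A hedged sketch — the `Answer` shape and its field names are assumptions for illustration, not the actual backend schema:

```typescript
// Hypothetical answer record: certainty on the 6-point scale, plus time taken.
interface Answer {
  certainty: number;   // 1 (guessing) … 6 (certain)
  responseMs: number;  // time taken to answer, in milliseconds
}

// Pearson correlation between certainty and response time.
// A negative value would suggest confident users answer faster.
function pearson(answers: Answer[]): number {
  const xs = answers.map(a => a.certainty);
  const ys = answers.map(a => a.responseMs);
  const mean = (v: number[]) => v.reduce((s, x) => s + x, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < answers.length; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}
```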
March 13
- Locally running prototype
- Generated some questions for each section
March 2
- Decided to research: to what extent can we distinguish AI-generated content from human-created content?
- Completed wireframes and research schedule