
Design & LLM Interaction

Research on how users interact with and direct AI systems through prompting, covering practical applications, collaborative writing, summarization, and decision support. Studies how intent, reasoning, and domain expertise shape effective human-AI workflows.

38 notes (primary) · 217 papers · 7 sub-topics

Workplace Applications

4 notes

Why does AI default to coaching instead of doing?

In workplace conversations, users often want AI to execute tasks, such as writing a document or gathering information, but AI tends to explain and advise instead. What drives this systematic mismatch between what users ask for and what AI provides?


Do LLM research ideas actually hold up when experts try to execute them?

Explores whether LLM-generated research ideas maintain their apparent novelty advantage once expert researchers spend 100+ hours implementing them. This matters because ideation-stage evaluation may not capture the feasibility barriers that surface during real-world execution.


Does concentrated AI exposure enable workers to adapt and reallocate?

When AI displaces specific tasks rather than spreading across many, workers may shift effort to non-displaced tasks within their occupation. Does this reallocation mechanism actually offset employment losses?


What happens to human wages in an AGI economy?

Does human labor retain economic value when AGI can replicate most work? This note explores whether wages would come to reflect the computational cost of replacing a worker rather than the value that worker actually produces.


Prompts and Prompting

2 notes

Does iterative prompt engineering undermine scientific validity?

When researchers repeatedly adjust prompts to get desired outputs, does this practice introduce hidden bias and produce unreplicable results? The question matters because LLM-based research is proliferating without clear methodological safeguards.


Can we measure prompt quality independent of model outputs?

This note explores whether prompt quality has measurable, learnable dimensions beyond intuition. The research asks whether prompts can be evaluated by their communicative, cognitive, and instructional properties rather than solely by the outputs they produce.


Visual and GUI Agents

2 notes

Do text-based GUI agents actually work in the real world?

Can language-only agents that rely on HTML or accessibility trees handle real user interfaces that lack structured metadata? This matters because deployed systems face raw visual screenshots, not oracle-quality structural data.


Where do vibe coding students actually spend their debugging time?

When novices use AI coding tools, do they engage with the code itself, or do they primarily test the prototype? Understanding where students focus reveals how AI-assisted coding shapes learning behavior.


Co-Writing and Collaboration

1 note

AI in Education

1 note