How do you build domain expertise into general AI models?

How LLMs adapt to specific domains, the methods that work, and the hidden costs of each approach.

Topic Hub · 30 linked notes · 12 sections

Sub-Topic Maps

2 notes

How do you add domain expertise without losing general reasoning?

Exploring the tension between injecting specialized knowledge and preserving a model's broad problem-solving ability. Five distinct approaches exist, each with different trade-offs in cost, flexibility, and reliability.
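
The five approaches are not named at this level of the map. As a rough orientation only, here is a minimal sketch assuming the usual candidates (prompting, retrieval augmentation, adapters, full fine-tuning, continued pretraining) with illustrative trade-off labels; the underlying notes may carve this up differently:

```python
from dataclasses import dataclass

@dataclass
class AdaptationApproach:
    name: str
    modifies_weights: bool  # does the base model itself change?
    marginal_cost: str      # rough cost of adding one new domain
    swappable: bool         # can the domain be switched at inference time?

# Assumed candidates and labels, not a taxonomy taken from the notes.
APPROACHES = [
    AdaptationApproach("prompt engineering",           False, "near zero",        True),
    AdaptationApproach("retrieval augmentation (RAG)", False, "index upkeep",     True),
    AdaptationApproach("parameter-efficient adapters", True,  "small GPU job",    True),
    AdaptationApproach("full supervised fine-tuning",  True,  "large GPU job",    False),
    AdaptationApproach("continued domain pretraining", True,  "pretraining-scale", False),
]

for a in APPROACHES:
    print(f"{a.name:31} weights change: {a.modifies_weights!s:5} "
          f"cost: {a.marginal_cost:17} swappable: {a.swappable}")
```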

What breaks when specialized AI models reach real users?

When domain-specific AI systems move from research to production, deployment patterns, routing decisions, and interface design all shape whether users can actually complete tasks. Understanding these friction points reveals where specialized models fail in practice.
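
One of the routing decisions in question, as a minimal sketch: send a query to a specialist model only when an in-domain classifier is confident, otherwise fall back to the generalist. The threshold and all callables here are hypothetical stand-ins:

```python
SPECIALIST_THRESHOLD = 0.85  # assumed confidence cutoff; tuned per deployment

def route_query(query, classify_domain, specialist, generalist):
    """Route to the domain specialist only on a confident in-domain call."""
    if classify_domain(query) >= SPECIALIST_THRESHOLD:
        return specialist(query)
    # Low-confidence traffic goes to the generalist: misrouting to a narrow
    # specialist typically fails harder than a mediocre general answer.
    return generalist(query)

# Usage with stand-in callables:
answer = route_query(
    "What is the statute of limitations here?",
    classify_domain=lambda q: 0.92,           # pretend legal classifier
    specialist=lambda q: "legal-model answer",
    generalist=lambda q: "general-model answer",
)
```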

Writing Angles

1 note

Co-Writing, Creativity, and Academic Integrity

7 notes

How do writers use AI through different creative stages?

This study explores whether writers deploy large language models differently depending on their creative needs—from generating initial ideas to organizing thoughts to drafting final text. Understanding these patterns reveals how humans and AI can complement each other's strengths.

Can AI generate hundreds of fake academic papers automatically?

Explores whether language models can industrialize academic fraud by retroactively constructing theoretical justifications for data-mined patterns, complete with fabricated citations and creative signal names.

Can structured pipelines make LLM novelty assessment reliable?

Explores whether breaking novelty assessment into extraction, retrieval, and comparison stages helps LLMs align with human peer reviewers and produce more rigorous, evidence-based evaluations.
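
A minimal sketch of the staged structure, with placeholder logic in every stage; only the extraction, retrieval, comparison decomposition comes from the note:

```python
def extract_claims(paper_text: str) -> list[str]:
    # Stage 1: isolate contribution claims (an LLM call in the real pipeline;
    # a naive keyword filter stands in here).
    return [s.strip() for s in paper_text.split(".") if "we propose" in s.lower()]

def retrieve_prior_work(claim: str, corpus: list[str]) -> list[str]:
    # Stage 2: find candidate prior work per claim (word overlap stands in
    # for dense retrieval over a real citation index).
    terms = set(claim.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def compare(claim: str, prior: list[str]) -> dict:
    # Stage 3: emit an evidence-backed verdict rather than a bare novelty score.
    return {"claim": claim, "evidence": prior, "looks_novel": not prior}

def assess_novelty(paper_text: str, corpus: list[str]) -> list[dict]:
    return [compare(c, retrieve_prior_work(c, corpus))
            for c in extract_claims(paper_text)]
```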

Can specialized agents write better scientific papers than single models?

Multi-agent frameworks decompose writing into specialized subtasks. This explores whether distributed agents maintaining cross-document consistency outperform single-model approaches on manuscript quality and literature synthesis.
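
As a sketch of the decomposition idea, with a shared memory standing in for whatever consistency mechanism the actual frameworks use; the agent internals here are invented:

```python
class SharedMemory:
    """State every agent reads and writes, so sections cannot drift apart."""
    def __init__(self):
        self.glossary: dict[str, str] = {}  # canonical term -> agreed definition
        self.citation_keys: set[str] = set()

def literature_agent(topic: str, mem: SharedMemory) -> str:
    mem.citation_keys.update({"smith2021", "lee2023"})  # placeholder keys
    return f"Survey of {topic}, grounded in {sorted(mem.citation_keys)}."

def drafting_agent(section: str, mem: SharedMemory) -> str:
    # Drafts may only use terminology already registered in the glossary.
    terms = ", ".join(sorted(mem.glossary)) or "(no terms registered yet)"
    return f"[{section}] drafted with shared terms: {terms}."

mem = SharedMemory()
mem.glossary["domain adaptation"] = "specializing a general model to one field"
manuscript = [
    literature_agent("domain adaptation", mem),
    drafting_agent("Methods", mem),
]
```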

Does AI assistance weaken our brain's ability to think independently?

Can using language models for cognitive tasks reduce neural connectivity and learning capacity? New EEG evidence tracks how external AI support may systematically degrade our cognitive networks over time.

Does AI assistance actually harm the way developers learn?

When developers use AI tools while learning new programming concepts, does it impair their ability to understand code, debug problems, and build lasting skills? Understanding this matters for how we deploy AI in education and training.

Does AI separate intellectual form from the thinking behind it?

Exploring whether AI's ability to generate polished intellectual products without the underlying reasoning process represents a genuinely new kind of decoupling, and what that means for how we evaluate knowledge.

Decision Support and Thinking Assistants

1 note

Expertise Transformation

3 notes

Does AI reshape expert work into knowledge management?

As AI generates knowledge at scale, does expert work shift from creating new understanding to curating and validating machine outputs? This matters because curation and creation demand different cognitive skills.

How does LLM-mediated search change what expertise requires?

When experts search through LLMs instead of traditional inquiry, do they need fundamentally different skills? This explores whether domain knowledge alone is enough when the search itself operates on statistical patterns rather than meaningful questions.

Does polished AI output trick audiences into trusting it?

When AI generates professional-looking graphs, diagrams, and presentations, do audiences mistake visual polish for analytical depth? This matters because appearance might substitute for actual expertise.

Competence Misattribution

2 notes

Do AI-assisted outputs fool users about their own skills?

When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action.

Should we treat LLM outputs as real empirical data?

Can synthetic text generated by language models serve as evidence in the same way observations from the world do? This matters because researchers increasingly rely on AI-generated content without accounting for its fundamentally different epistemic status.

Prompt Science and Literacy

3 notes

Can we measure prompt quality independent of model outputs?

This explores whether prompt quality has measurable, learnable dimensions beyond intuition. The research asks if prompts can be evaluated by their communicative, cognitive, and instructional properties rather than by their results.
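
A minimal sketch of what output-independent scoring could look like; the three dimensions are from the note, while the 0 to 1 scales and the example scores are invented:

```python
from dataclasses import dataclass

@dataclass
class PromptRubric:
    communicative: float  # is the intent stated unambiguously?
    cognitive: float      # does it scaffold the reasoning the task needs?
    instructional: float  # are output format and constraints spelled out?

    def overall(self) -> float:
        return (self.communicative + self.cognitive + self.instructional) / 3

vague = PromptRubric(communicative=0.2, cognitive=0.1, instructional=0.3)
explicit = PromptRubric(communicative=0.9, cognitive=0.7, instructional=0.9)
# The ranking exists before any model is run - that is the whole point.
assert explicit.overall() > vague.overall()
```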

Does iterative prompt engineering undermine scientific validity?

When researchers repeatedly adjust prompts to get desired outputs, does this practice introduce hidden bias and produce unreplicable results? The question matters because LLM-based research is proliferating without clear methodological safeguards.
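
One conventional safeguard, sketched below: iterate prompt variants on a development split only, then score the frozen prompt exactly once on a held-out test split. Split sizes and names are illustrative:

```python
import random

def make_splits(examples: list, dev_frac: float = 0.3, seed: int = 0):
    """Shuffle once, then carve off a dev split for prompt iteration."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_frac)
    return shuffled[:cut], shuffled[cut:]  # (dev, test)

dev, test = make_splits(list(range(100)))
# Tune prompt variants against `dev` as many times as needed...
# ...but evaluate on `test` once, pre-registered, and report that number.
```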

Do popular prompting techniques actually improve model performance?

Five widely cited prompting methods (chain-of-thought, emotion prompting, sandbagging, and others) are tested across multiple models and benchmarks to see whether their reported improvements hold up under rigorous statistical analysis.
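
A paired-bootstrap sketch of the kind of check involved: resample items and count how often the technique's accuracy beats the baseline's. The per-item 0/1 correctness vectors are hypothetical inputs:

```python
import random

def paired_bootstrap(baseline, treated, n_boot=10_000, seed=0):
    """Fraction of resamples in which the technique outscores the baseline."""
    rng = random.Random(seed)
    n, wins = len(baseline), 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample items with replacement
        if sum(treated[i] for i in idx) > sum(baseline[i] for i in idx):
            wins += 1
    return wins / n_boot

# Values near 1.0 suggest a real gain; values near 0.5 suggest noise.
```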

Workplace AI Impact and Labor Economics

5 notes

Do LLM research ideas actually hold up when experts try to execute them?

Explores whether LLM-generated ideas maintain their apparent novelty advantage when expert researchers spend 100+ hours implementing them. This matters because ideation-stage evaluation may not capture real-world feasibility barriers.

What collaboration level do workers actually want with AI?

Explores whether workers prefer full automation, equal partnership, or continuous human control across different tasks. Understanding worker preferences could reshape how organizations deploy AI systems.

Why does AI default to coaching instead of doing?

In workplace conversations, users often want AI to execute tasks like writing or gathering information, but AI tends to explain and advise instead. What drives this systematic mismatch between what users need and what AI provides?

Does concentrated AI exposure enable workers to adapt and reallocate?

When AI displaces specific tasks rather than spreading across many, workers may shift effort to non-displaced tasks within their occupation. Does this reallocation mechanism actually offset employment losses?

What happens to human wages in an AGI economy?

Does human labor retain economic value when AGI can replicate most work? This explores whether wages would reflect the computational cost of replacement rather than the value workers actually produce.
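
The mechanism reduces to a back-of-envelope calculation; every number below is made up purely to show the shape of the argument:

```python
# If AGI can replicate one worker-year of output for a given compute spend,
# that spend becomes the ceiling on the worker's annual wage.
gpu_hours_per_worker_year = 20_000  # assumed compute to match one worker-year
dollars_per_gpu_hour = 2.00         # assumed cloud price

wage_ceiling = gpu_hours_per_worker_year * dollars_per_gpu_hour
print(f"Implied annual wage ceiling: ${wage_ceiling:,.0f}")  # $40,000
```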

Production Deployment

1 note

Domain-Specific Measurement

1 note