Seminar Group 4

Academic Writing (AW)

Andy Weeger

Neu-Ulm University of Applied Sciences

November 30, 2025

Motivation

The introduction sets expectations for the entire thesis. A good introduction promises: “Here is an important problem, here is how I will solve it, and here is the new knowledge the research will produce.”

If these points are unclear, readers disengage—even if your methods and findings are excellent.

And: a strong introduction makes the rest of your writing easier.

Goal

From good to great — perfecting your thesis introductions.

To demonstrate areas for improvement, we will use real examples from your cohort.

Structure

The 5-paragraph formula

Every introduction needs:

  1. Hook - Why this topic matters now (context)
  2. Background - What we know from literature (synthesis)
  3. Tension - What’s missing or unresolved (gap/problem)
  4. Resolution - Your approach to address it (RQ, theory, method)
  5. Contribution - Expected value of your work (new knowledge)

Key principle: Each element gets its own paragraph(s)

Overview

Most introductions follow the formula, with some doing so particularly well.

Several introductions demonstrated:

  • Strong hooks with contemporary relevance and urgency
  • Good background synthesis positioning research in current discourse
  • Clear tension statements identifying specific gaps
  • Appropriate theory selection for research context
  • Methodological alignment between RQ and design

Alongside these strengths, the following recurring patterns were spotted:

  • Theory mentioned but not deeply integrated (theoretical embedding)
  • Concepts are too vague and/or scope is too broad (research question operationalisation)
  • Too generic or unclear theoretical advancement (contribution statements)

Theoretical embedding

Clarification

Strong theoretical embedding means:

  • Your research is grounded in and guided by established theory
  • Theory shapes your research question, design, and interpretation
  • You explain how the theory applies to your specific context
  • You show what’s new or different about your application

Not just “I will use Theory X” but rather “Theory X suggests relationship Y, but in context Z this may work differently because …”

Example #1

Improved version:

Cognitive Offloading Theory (Risko & Gilbert, 2016) posits that individuals shift cognitive work to external tools when doing so reduces mental effort without compromising task performance. However, this theory was developed in contexts where tools provide static support (calculators, notes). AI advisory systems differ fundamentally: they actively generate recommendations and evolve through interaction, potentially creating dependency rather than augmentation.

I extend Cognitive Offloading Theory by integrating Self-Efficacy Theory (Bandura, 1977) to explain a critical gap: when does offloading become maladaptive? I propose that repeated AI reliance for consequential decisions creates a feedback loop: offloading reduces practice, practice reduction lowers self-efficacy, low self-efficacy increases anxiety when AI is unavailable, anxiety further increases AI dependence. This theoretical integration predicts that AI reliance effects vary by decision stakes and frequency—a boundary condition not examined in either theory alone.

Example #2

Further improvement:

While TTF has been applied to enterprise technology adoption, it has not been used to analyze within-process heterogeneity: how different stages of a single business process (recruitment) demand different technological capabilities. This extension allows us to explain why AI may excel at screening but underperform in interviewing, advancing TTF beyond uniform technology-process assumptions.

Example #3

Improved approach²:

Identity Theory (Stets & Serpe, 2013) provides the primary explanatory framework: cybersecurity incidents threaten IT professionals’ identity as “competent defenders,” triggering identity verification processes. I draw on two complementary perspectives to explain contextual variations: Job Demands-Resources Theory explains when organizational support buffers identity threat, and Sensemaking Theory guides analysis of how professionals reconstruct threatened identities post-incident. This integration explains both universal identity-stress mechanisms and the organizational contingencies that moderate outcomes.

Recommendations

To strengthen theoretical embedding:

  1. Choose theory deliberately: Explain why this theory (not just what it says)
  2. Show application: How does theory explain your phenomenon? What mechanisms?
  3. Identify gaps or extensions: What’s new about applying theory to your context?
  4. Integrate multiple theories carefully: Specify primary vs. auxiliary roles
  5. Derive expectations: Theory should lead to testable predictions or analytical framework

Ask yourself: Could I conduct this study without the theory? If yes, your embedding is too weak.

Research questions

Operationalization

Well-operationalized research questions have:

  • Theory alignment (concepts map to theoretical constructs)
  • Clear concepts (every key term is defined or definable)
  • Appropriate scope (answerable in one thesis with available resources)
  • A specified context (population, setting, and boundaries stated)
  • Measurable or observable outcomes (it is clear how you’d know you’ve answered it)

Example #1

Operationalized version:

RQ1: Which digital infrastructure providers support the 50 most-visited AI nudification and deepfake websites with which services (hosting, payment, DNS, advertising)? RQ2: How do current EU regulatory mechanisms address infrastructural enablers?

Example #2

Choose one RQ and commit:

How does professional identity shape IT professionals’ emotional and cognitive responses during and after cybersecurity incidents?

or

To what extent does professional identity strength moderate the relationship between cybersecurity incident severity and anxiety as long-term psychological outcomes, and how does organizational support buffer these effects?

Example #3

Operationalized version:

To what extent do security vulnerabilities differ between LLM-generated and developer-written code in the SecurityEval dataset, when categorized by CWE type and severity?

And (for evaluation):

How does the proposed security assessment framework perform in detecting and mitigating vulnerabilities compared to existing static analysis tools, as measured by precision, recall, F1-score, and developer-perceived usefulness?

Frame artifact creation (objective 2) as a contribution, not as a research question.

Example #4

Suggestion

RQ1: Under what conditions³ do LLMs reproduce content from coordinated propaganda networks in their outputs, and how does reproduction frequency vary across these conditions?

RQ2: What safeguards⁴ show potential to reduce propaganda reproduction?

Each RQ is independently answerable, with clear success criteria.

Recommendations

Test RQ operationalization with these questions:

  1. Concept clarity: Can you define every key term specifically?
  2. Measurability: What exactly would you measure/observe/analyze?
  3. Scope: Could you complete this in one thesis with available data/access?
  4. Outcomes: What would “answering” this question look like?
  5. Single focus: Is this actually 2+ questions disguised as one?

Common fixes:

  • Split compound RQs into separate sub-questions
  • Replace vague verbs (“influence,” “affect”) with specific relationships (“increase,” “moderate,” “mediate”)
  • Add operational definitions for novel concepts

Contributions

Guidelines

Strong contribution statements:

  • Name specific theories being extended, challenged, or integrated
  • Identify concrete deliverables (what artifact/model/framework will exist?)
  • Specify novel constructs or relationships being introduced
  • Differentiate theoretical and practical contributions explicitly
  • Show advancement (what can we know/do afterward that we couldn’t before?)

Example #1

More specific version:

This study extends technology-facilitated sexual violence (TFSV) research by introducing the concept “abuse-enabling infrastructure”—the technical and economic systems (hosting, payment, advertising) that mediate between abusive actors and victims.

Drawing on socio-technical systems theory (Trist & Bamforth, 1951), I argue that intervention must target systemic interdependencies rather than individual platforms. Specifically, I demonstrate that payment gatekeeping and hosting accountability function as structural bottlenecks capable of disrupting abuse networks, challenging the current regulatory focus on content moderation alone.

This reframes TFSV from a behavioral problem requiring victim reporting to an infrastructural problem requiring systems-level governance, advancing theory by identifying previously unexamined leverage points for intervention.

Example #2

Enhanced version:

This study extends Information Foraging Theory (IFT) from individual cognition to adversarial information environments. I introduce the concept of “information patch poisoning”: deliberate manipulation of cue density and relevance signals that IFT-driven systems use for retrieval decisions.

The study contributes a theoretically-grounded vulnerability taxonomy identifying three exploitation vectors: (1) cue salience manipulation, (2) source authority spoofing, and (3) retrieval pathway optimization. This framework predicts how grooming effectiveness varies by query type and search mechanism—advancing IFT into contested information spaces.

The research evaluates five safeguard strategies (e.g., source diversity requirements, temporal verification, cross-reference validation) against the theoretical taxonomy, delivering evidence-based design principles for LLM developers and red-teaming protocols for information integrity auditors.

Recommendations

Make contributions specific and concrete:

  1. Name your theories: Which specific theory are you extending?
  2. Identify deliverables: What tangible artifact/framework/model will exist?
  3. Specify novel elements: What construct/relationship/mechanism is new?
  4. Show advancement: What becomes possible afterward?
  5. Differentiate types: Theory contribution ≠ practical contribution ≠ methodological contribution

Use the specificity test:

  • What exactly will exist after this study? (name it)
  • Which theory are you extending? (must name specific theory)
  • What new construct/relationship? (must be nameable and definable)

Language and style

Summary

Key takeaways

  • Theoretical embedding requires showing how theory explains your phenomenon, not just citing it
  • Multiple theories need an explicit integration model: specify primary vs. auxiliary roles
  • Research questions must be operationalized: clear concepts, measurable, appropriate scope
  • Contributions need specificity: name theories, deliverables, and novel elements
  • Avoid generic language: instead of “extends the literature”, name the specific theory and contribution
  • Objectives ≠ research questions: “Compare X” is what you do, not what you ask

Final advice

  • Start specific, not general: Easier to broaden than narrow during revision
  • Make theory work for you: Choose theories that genuinely explain your phenomenon
  • Write testable: Can someone design a study from your RQ? If not, add precision
  • Name everything: Contributions, constructs, frameworks—unnamed elements feel vague
  • Read aloud: If you stumble explaining theory application, so will your reader

Strong introductions require multiple revisions—every pass should increase precision!

Q&A

Find slides here:

(only for your personal use, do not share)

Footnotes

  1. Person–job fit

  2. Use one primary theory + specify auxiliary roles

  3. prompt framing, query type, real-time search activation

  4. prompt filtering, source verification, output monitoring