Academic Writing (AW)
Neu-Ulm University of Applied Sciences
November 30, 2025
The introduction sets expectations for the entire thesis. A good introduction promises: 'Here is an important problem, here is how I will solve it, and here is the new knowledge the research will produce.'
If these points are unclear, readers disengage—even if your methods and findings are excellent.
And: a strong introduction makes the rest of your writing easier.
From good to great — perfecting your thesis introductions.
To demonstrate areas for improvement, we will use real examples from your cohort.
Every introduction needs:
Key principle: Each element gets its own paragraph(s)
Most introductions follow the formula, with some doing so particularly well.
Several introductions demonstrated:
Alongside these strengths, the following patterns were spotted:
Strong theoretical embedding means:
Not just 'I will use Theory X' but rather 'Theory X suggests relationship Y, but in context Z this may work differently because …'
This study draws on Cognitive Offloading Theory. It refers to the process by which individuals shift mental effort to external systems to offload the burden of mental work needed for a task (Risko and Gilbert 2016). With respect to AI, this means that instead of evaluating ideas themselves, individuals increasingly allow AI systems to do the thinking for them. While this can seem as helpful and efficient at first sight, researchers warn that this pattern may lead to reduced cognitive effort, sometimes described as cognitive laziness (Carr 2020), indicating that people may gradually stop using their own analytical abilities when external tools are at one’s fingertips. Over time, this reduced engagement with one’s own decision-making processes may lower decision self-efficacy (Bandura 1977), which then reinforced by Zhang et al. (2024), describes the belief in one’s ability to make decisions independently. Through self-efficacy theory, this study suggests that low self-efficacy will result in increased anxiety when an individual is forced to use his/her own judgment.
What’s missing here?
Decent theoretical embedding, but the relationships could be explained more clearly given the specifics of the AI technology.
Particularly the theoretical mechanism(s) linking AI reliance, offloading, self-efficacy, and anxiety.
Improved version:
Cognitive Offloading Theory (Risko & Gilbert, 2016) posits that individuals shift cognitive work to external tools when doing so reduces mental effort without compromising task performance. However, this theory was developed in contexts where tools provide static support (calculators, notes). AI advisory systems differ fundamentally: they actively generate recommendations and evolve through interaction, potentially creating dependency rather than augmentation.
I extend Cognitive Offloading Theory by integrating Self-Efficacy Theory (Bandura, 1977) to explain a critical gap: when does offloading become maladaptive? I propose that repeated AI reliance for consequential decisions creates a feedback loop: offloading reduces practice, practice reduction lowers self-efficacy, low self-efficacy increases anxiety when AI is unavailable, anxiety further increases AI dependence. This theoretical integration predicts that AI reliance effects vary by decision stakes and frequency—a boundary condition not examined in either theory alone.
To examine whether “the right capability” aligns with “the right task” at each step, the thesis adopts Task–Technology Fit (TTF) as the primary theoretical lens. TTF distinguishes task requirements from technology characteristics and posits that higher fit improves task performance and downstream outcomes (Goodhue & Thompson, 1995). Within this framing, sourcing, screening, assessment, interviewing and decision/offer are treated as distinct task environments with heterogeneous information-processing demands; the analysis evaluates how specific AI capabilities display stronger or weaker fit to these demands, with implications for matching accuracy and, ultimately, PJF [1].
What works well here?
Strengths:
However: TTF is extremely difficult to assess (and not many people want to read another lame TTF study ;-)).
Further improvement:
While TTF has been applied to enterprise technology adoption, it has not been used to analyze within-process heterogeneity: how different stages of a single business process (recruitment) demand different technological capabilities. This extension allows us to explain why AI may excel at screening but underperform in interviewing, advancing TTF beyond uniform technology-process assumptions.
The research aims to illuminate the interplay between identity, stress and recovery. While identity theory provides the main explanatory lens, complementary insights from stress, stereotype, psychological ownership, framing and sensemaking theory inform the broader conceptual framing.
What’s problematic here?
Improved approach [2]:
Identity Theory (Stets & Serpe, 2013) provides the primary explanatory framework: cybersecurity incidents threaten IT professionals’ identity as “competent defenders,” triggering identity verification processes. I draw on complementary perspectives to explain contextual variations: Job Demands-Resources Theory explains when organizational support buffers identity threat, and Sensemaking Theory guides analysis of how professionals reconstruct threatened identities post-incident. This integration explains both universal identity-stress mechanisms and organizational contingencies that moderate outcomes.
To strengthen theoretical embedding:
Ask yourself: Could I conduct this study without the theory? If yes, your embedding is too weak.
Well-operationalized research questions have:
How do digital platform infrastructures enable the growth and monetization of AI-generated intimate image abuse, and what regulatory or technical interventions could disrupt this ecosystem?
What needs operationalization?
Operationalized version:
RQ1: Which digital infrastructure providers support the 50 most-visited AI nudification and deepfake websites, and with which services (hosting, payment, DNS, advertising)?
RQ2: How do current EU regulatory mechanisms address these infrastructural enablers?
To resolve this issue, one of these possible research questions will be investigated:
How does IT identity shape the emotional and cognitive responses of IT professionals during and after significant cybersecurity incidents?
How do variations in IT identity strength and perceived organizational support influence IT professionals’ coping strategies and long-term psychological outcomes following such incidents?
What’s problematic about presenting alternatives?
BTW: I doubt that IT identity is the right concept here …
Choose one RQ and commit:
How does professional identity shape IT professionals’ emotional and cognitive responses during and after cybersecurity incidents?
or
To what extent does professional identity strength moderate the relationship between cybersecurity incident severity and anxiety as long-term psychological outcomes, and how does organizational support buffer these effects?
How can a security assessment framework evaluate vulnerabilities in LLM-generated code compared to developer-written code using the dataset SecurityEval?
Compare vulnerabilities in LLM-generated code with developer-written code using the SecurityEval dataset (Siddiq & da Silva Santos 2022).
Design and implement a security assessment framework that integrates vulnerability detection techniques with automated mitigation mechanisms, validated through quantitative metrics (e.g., precision, recall, F1-score) and qualitative developer feedback evaluated through the Technology Acceptance Model (TAM).
How do objectives differ from RQs?
Operationalized version:
To what extent do security vulnerabilities differ between LLM-generated and developer-written code in the SecurityEval dataset, when categorized by CWE type and severity?
And (for evaluation):
How does the proposed security assessment framework perform in detecting and mitigating vulnerabilities compared to existing static analysis tools, as measured by precision, recall, F1-score, and developer-perceived usefulness?
Frame artifact creation (objective 2) as contribution, not as research question.
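When you name quantitative success criteria such as precision, recall, and F1-score, spell out how they will be computed. A minimal reminder of the standard formulas, assuming vulnerabilities are counted at the level of individual findings:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 · (Precision · Recall) / (Precision + Recall)

where TP, FP and FN are true positives (correctly flagged vulnerabilities), false positives (wrongly flagged findings) and false negatives (missed vulnerabilities).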
How and under what conditions does LLM grooming influence model retrieval and output behaviour, particularly through variations in prompt sensitivity, and what safeguards could mitigate such manipulation?
What could be improved?
Strengths
Suggestion
RQ1: Under what conditions [3] do LLMs reproduce content from coordinated propaganda networks in their outputs, and how does reproduction frequency vary across these conditions?
RQ2: What safeguards [4] show potential to reduce propaganda reproduction?
Each RQ is independently answerable with clear success criteria
Test RQ operationalization with these questions:
Common fixes:
Strong contribution statements:
Theoretically, it extends the literature on technology-facilitated sexual violence by incorporating insights from socio-technical systems theory and digital platform governance, reframing image-based abuse as an infrastructural rather than behavioral problem.
What’s too vague here?
More specific version:
This study extends technology-facilitated sexual violence (TFSV) research by introducing the concept “abuse-enabling infrastructure”—the technical and economic systems (hosting, payment, advertising) that mediate between abusive actors and victims.
Drawing on socio-technical systems theory (Trist & Bamforth, 1951), I argue that intervention must target systemic interdependencies rather than individual platforms. Specifically, I demonstrate that payment gatekeeping and hosting accountability function as structural bottlenecks capable of disrupting abuse networks, challenging the current regulatory focus on content moderation alone.
This reframes TFSV from a behavioral problem requiring victim reporting to an infrastructural problem requiring systems-level governance, advancing theory by identifying previously unexamined leverage points for intervention.
The expected contribution is twofold. First, it will provide an initial theoretical framework for understanding LLM grooming as a form of information operation targeting foundation models. Second, it will identify and evaluate practical safeguards that can strengthen the epistemic resilience of large-scale language systems.
What works? What could improve?
Strengths:
Room for improvement:
Enhanced version:
This study extends Information Foraging Theory (IFT) from individual cognition to adversarial information environments. I introduce the concept of “information patch poisoning”: deliberate manipulation of cue density and relevance signals that IFT-driven systems use for retrieval decisions.
The study contributes a theoretically-grounded vulnerability taxonomy identifying three exploitation vectors: (1) cue salience manipulation, (2) source authority spoofing, and (3) retrieval pathway optimization. This framework predicts how grooming effectiveness varies by query type and search mechanism—advancing IFT into contested information spaces.
The research evaluates five safeguard strategies (e.g., source diversity requirements, temporal verification, cross-reference validation) against the theoretical taxonomy, delivering evidence-based design principles for LLM developers and red-teaming protocols for information integrity auditors.
Make contributions specific and concrete:
Use the specificity test:
Strong introductions require multiple revisions—every pass should increase precision!
Find slides here:
(only for your personal use, do not share)
[1] Person–job fit
[2] Use one primary theory + specify auxiliary roles
[3] prompt framing, query type, real-time search activation
[4] prompt filtering, source verification, output monitoring