Thanks to AI, cheating in professional military education is becoming pervasive. Drawing on our perspectives as a uniformed instructor and as a recent course graduate, we’re seeing officers increasingly outsource their thinking and assignments to sophisticated AI tools despite attempted restrictions. The rising, unauthorized use of AI is not merely an integrity issue. It undermines the very mission of professional military education and erodes the military’s professional ethos.
However, AI cheating is not the problem. Rather, it is a symptom of disruption in a system composed of many complex, interconnected parts. The debate in and beyond these pages over the role of AI in military classrooms offers many thoughtful insights but falls short of outlining a comprehensive, actionable path forward. Our goal is to share an approach for integrating AI into professional military education that not only promotes its utility, as senior officials urge, but also confronts the tensions, like academic dishonesty, that AI exacerbates. We apply a familiar problem-solving methodology called “design” to explain how AI integration is fundamentally a systems challenge requiring institutional overhaul.
A Systemic Problem
James Lacey calls for educators to fully embrace AI or risk obsolescence, reasoning that AI can dramatically improve performance when properly integrated. While these are valid points, Lacey’s approach is shortsighted. For one, he dismisses legitimate concerns that while some faculty and students benefit from the AI tools he enthusiastically endorses, others precariously offload their thinking and judgment, as a growing body of research and trends throughout academia suggest. Further, Lacey treats rising academic misconduct among honor-bound military professionals with a passing glance, despite acknowledging that “well over half” of his students use AI, namely on assessments, even with attempted restrictions. Policies vary by institution, but the oscillation between outright bans and conditional permissions is creating confusion.
Responding to Lacey, Matt Woessner challenges the notion that professional military education faces a binary choice to embrace AI or maintain the status quo. Woessner suggests “a middle ground,” including helpful ways to address officers’ growing dependence on AI by having them study its weaknesses, such as awareness of the “programmer’s invisible hand.” He frames the dilemma facing the military’s educational institutions more holistically, asking: “not whether they should embrace this new technology, but how to do so in a way that prepares their students for the future.” This is indeed the right question, but Woessner’s prescriptions require more detail.
While Woessner promotes AI classroom engagement to a degree, he leans more heavily on “AI-free assessment tools,” like oral exams, as checkpoints to verify learning and to create incentives for genuine engagement. His rationale is that new technology has compromised the reliability of traditional assessments, including written essays and take-home exams. However, his approach treats AI integration primarily as a measurement issue (how to verify learning despite AI availability) rather than as a comprehensive environmental design challenge, one that balances core competence with the benefits of human-AI teaming: the very issue Lacey pinpoints. Lacey responded with a biting critique of Woessner’s reasoning, doubling down on an “all-AI, all-the-time” approach. Woessner followed suit, reasserting his call for a “middle way.” Their debate should be essential reading for anyone with a stake in these matters, including clients of the military’s educational system such as ourselves.
Despite their discerning contributions, both authors overlook the root causes of a deeper systemic problem and thus offer piecemeal solutions. For instance, Lacey’s unreserved endorsement downplays the risks of AI use that even its developers don’t fully understand and that some researchers gravely fear. And while Woessner’s AI-free assessments value authenticity, they demote the mastery of technologies that not only pervade the operating environment but may soon revolutionize warfare.
Rather than applying patchwork fixes that aggravate intersecting tensions, the architects of professional military education should refer to problem-solving methodologies that have long been the cornerstone of the curricula. One methodology known as design not only offered us a starting point for unpacking the root causes of AI cheating, but more importantly led us to generate a roadmap for integrating AI into military classrooms that accounts for the interactions within a complex system.
Applying Design
Design is ideal for this scenario because it is a conceptual framework that serves as the basis for more detailed planning, which each institution will ultimately need to conduct based on its unique requirements, as Woessner emphasizes. Design promotes critical, creative, and systems thinking to understand strategic direction and guidance, to understand the environment, and to define the problem. The main output of design is a broad solution known as an operational approach.
The strategic direction and guidance for integrating AI into military classrooms are marked by a resolve to adapt. Pledges to rapidly integrate AI across the force coincide with a comprehensive review of all military education and training based on the White House’s AI policy. As adversaries use AI to streamline command processes, optimize battlefield logistics, and even place important decisions in the hands of algorithms, senior leaders make it clear that professional military education institutions cannot afford to remain static or apply haphazard measures for AI integration.
What’s clear is that the institutions should reform to better prepare military professionals for future operating environments co-inhabited (and perhaps dominated) by AI, while retaining the ability to think, act, and lead effectively without it. This dual requirement is critical. Future battlefields will likely feature a contested electromagnetic spectrum where AI tools become unavailable precisely when we need them most. Benjamin Jensen calls for transforming military schoolhouses into small, elite AI battle labs, thereby creating a niche cadre of “enlightened soldiers.” While creative, Jensen’s approach would fail to prepare the wider force for future warfare. Still, his prescriptions point to a major tension: AI is creating dysfunction by exposing vulnerabilities of a system designed for a bygone era.
Integrating AI into professional military education faces obstacles driven by several interconnected root causes. Lacey implies that most civilian educators lack the technical competence and the will to incorporate AI tools in ways that force officers to practice their judgment, exposing a gap between the educators responsible for delivering the curriculum and the desired learning outcomes. Woessner’s observation that students show “strange deference” to AI, questioning classmates but accepting AI pronouncements, identifies a cognitive vulnerability. Jensen, meanwhile, pinpoints how the military’s enthusiasm for AI is colliding with a lack of foundational preparation to integrate it effectively.
Building on these insights, we identify two additional root causes through a behavioral economics lens. First, military culture quietly tolerates unethical behavior, although military professionals seldom admit it, according to Leonard Wong and Stephen J. Gerras. Their 2015 study, Lying to Ourselves: Dishonesty in the Army Profession, remains relevant because AI misuse is exacerbating the military’s ethical blind spots. Wong and Gerras argue that officers have become desensitized by a “deluge of demands” coupled with a constant need to verify compliance by putting their honor on the line. Their observations explain why signing AI policy pledges does little to promote ethical behavior: the incentive is misplaced. In other words, the system rewards compliance over honor and over desired behaviors like pursuing knowledge.
Second, cheating has never been so convenient because AI tools can quickly and effortlessly produce academic work. All of academia is wrestling with this issue, and professional military education is not immune. Despite policies restricting AI use on assessments, we’re seeing officers increasingly use AI to complete their assignments, from essays to theses, producing arguments that appear watertight but, upon closer inspection, display telltale anomalies like bizarre reasoning, incorrect citations, and factual inaccuracies. When questioned, several students openly divulge generating papers or answering exam questions with AI platforms, either partially or entirely. Some have even admitted to spending more time deceiving so-called online AI detectors than composing original thoughts.
Empirical evidence is limited since AI cheating is difficult to prove, but ample research in behavioral economics explains why it is becoming pervasive. Humans naturally display “bounded ethicality,” which is a predictable gap between the ethical choices we want to make and the choices we actually make. “Ethical fading” is a condition where individuals facing an ethical dilemma become so focused on self-interest that ethical considerations no longer seem relevant. The convenience and anonymity of AI tools, combined with heavy workloads and other pressures, make students (regardless of their status as professionals) highly susceptible to unethical shortcuts.
The problem confronting professional military education comes down to this: how to integrate AI in ways that promote desired behaviors while achieving course objectives. Piecemeal solutions are insufficient, and the current approach of peripheral reforms, such as tweaking a 10-year-old slide deck for a lesson or abruptly introducing a new AI agent during a practicum, is inadequate. The entire system of professional military education must change.
Institutional Overhaul
The operational approach we recommend amounts to institutional overhaul. We are not suggesting overturning decades of effective pedagogy and tradition, but professional military education should reform considerably given AI’s vast disruption. Our operational approach features three lines of effort for addressing the root causes listed above, pulling from the best elements of others’ proposals.
Human-AI Curricula
Rebase the curricula on Lacey’s powerful concept of human-AI teaming, which should serve as the central pillar of institutional overhaul given its downstream impacts. Individuals and groups who master the human-AI combination will exponentially outperform those who rely exclusively on either human or AI capabilities. The goal is not humans working alone, nor AI working alone, but humans learning to effectively combine their thinking and judgment with AI assistance in ways that compound over time through repeated practice. As Lacey observes, “we are rapidly entering an education environment where only those who master human-AI teaming are likely to survive.” He’s right about the imperative, though we diverge on the method: mastering this teaming requires deliberate practice in both AI-enabled and AI-restricted environments, not the AI-saturated instruction he advocates.
Institutions should embrace the “invisible hand” by tailoring AI companions, or intelligence augmentation systems, for students and faculty alike. This shift would provide students with personalized learning experiences, potentially replacing traditional homework and the need for lectures. The utility of intelligence augmentation is virtually limitless. Many students are already applying this idea in practice: using large language models as interactive research assistants, Socratic counterparts for debate, and summarizers of dense texts. Faculty can use AI to quickly turn existing materials into more practical, engaging lessons, reducing the time spent in class teaching new information. These are smart uses of AI because they create efficiencies and foster genuine engagement, unlike a student outsourcing an essay to a chatbot or an educator hypocritically passing off algorithmic feedback as authentic.
Fielding these tools will be resource intensive and logistically complex. Detailed planning is critical to match resources with requirements. There is also a risk of forming dependence on AI companions, which is all the more reason to mix in AI-restricted environments. Educators would benefit from communities of practice where they can regularly share their successes, failures, and concerns about AI in the classroom. Further, institutions should retain faculty who offer expertise, authentic connection, and mentorship. “Far from being driven into extinction,” Woessner states, “only humans have the capacity to teach students to evaluate strategic problems independently, thereby instilling the requisite skepticism needed to make effective human–machine collaboration possible.” The human educator may be more important than ever, guarding students from handing their agency over to a machine.
Behavioral Incentives
Apply behavioral economics principles that nudge all stakeholder groups toward desired outcomes. This is the critical missing piece in other approaches. Policies should favor permission over restriction and should address the misalignment of incentives by targeting both extrinsic and intrinsic motivations. Doing the work should be the point, not simply complying with requirements. Three examples of behavioral interventions are rebranding professional development as “occupational training,” reducing excessive academic workloads, and designing AI tools users are motivated to use.
First, enhance AI skills through ongoing, personalized occupational training for all. Augment sporadic faculty development assemblies with the same human-AI teaming concept. Reward exemplary progress with popular incentives like time-off awards or paid bonuses for civilian instructors. Use data from these activities in uniformed instructors’ evaluations to assess their potential to thrive with advanced technologies. Likewise, reward students who demonstrate mastery through grades or academic awards. To avoid “gaming the system,” assess skill in application, during classroom observation of an instructor for instance, rather than simply rewarding module completion. Hosting “what’s possible?” workshops or offering sandboxes for safe failure will allow experimentation with AI without fear of breaking something. Still, policies should clearly explain the penalties for noncompliance, and institutions should be willing to enforce them.
Second, scale back the student workload at home, namely by reducing assigned readings. Assigning hours of nightly reading is a time-worn practice, but behavioral studies show that overload can be counterproductive. Not only is there a positive correlation between high academic stress and AI-assisted cheating, but students also tend to reduce their engagement with dense reading lists as courses progress. Since students may forgo meaningful interaction with an abundance of original texts, educators should meet them where they are. Viewing reading assignments as steppingstones to human-AI dialogue and in-class collaboration, rather than as a comprehensive treatment of lesson material, will maximize engagement with selected works. Normalizing this approach can be done over time as the system adapts.
Third, design AI tools that discourage shortcuts and showcase their developmental utility. For example, having students compose and submit an essay via a schoolhouse AI program, rather than simply uploading a file to an online repository, would curb dishonest behavior and support self-development by recording writing-process analytics and offering tips. Access is critical, too. The fact that familiar AI tools are now available on government networks is a positive step, but the institution’s AI programs should be readily accessible to everyone, including hundreds of international students. A poorly designed agent could provide unhelpful or even incorrect advice, so it’s important that educators have the final say on grades and continue to offer their original critiques.
Assessments
Many traditional assessments are vulnerable to AI cheating, but that doesn’t mean they have lost their value. Composing original thoughts for an essay remains a powerful way to promote critical thinking, but whether students derive the intended benefit depends on the design. Absent controlled environments like proctored exam rooms, administrators should assume students will use AI regardless of any stated restrictions. Assessments should be designed accordingly, combining “AI-proof” and “AI-infused” assessments rather than simply “AI-free” or “AI-permitted” ones.
AI-proof assessments require classroom context, personal experience, and applied judgment that AI cannot replicate without human expertise. Rather than asking students to analyze well-known historical events or cases, analyses that AI can easily generate, create scenarios requiring personal context that AI cannot access, thus making the assessment AI-proof. Asking students to analyze a real-time, unfolding situation and immediately present their findings in a live discussion or role-playing scenario will encourage constructive dialogue. AI-proof assessments will be difficult to scale, especially the time-intensive oral exams that Woessner endorses, but there are creative ways to reduce these burdens, such as simulating desk-side briefings to a senior leader in small groups. In these cases, AI remains a tool for formulation rather than a substitute for presence.
AI-infused assessments can evaluate students’ ability to effectively combine human judgment with AI assistance, while developing and assessing the necessary technical literacy at the same time. Traditional assessments can be modified so AI supplements students’ work rather than replacing it. We’ve seen prototypes of AI-infused assessments that hold promise, such as an AI agent designed to support course of action development during an operational planning practicum. The agent served as an interactive medium for students to explore ideas, gain insights, and test assumptions and later provided personalized feedback on how well users leveraged the tool’s potential according to a rubric.
To reduce compliance-driven motivation, interactions should encourage students’ self-determination and guard against sycophancy. Here’s an example of a Socratic agent conducting a check on learning: “Instead of ‘tell me what you know then I’ll grade your response,’ we could start with genuine questions YOU have about joint operations. What puzzles you? What seems contradictory?” Compared with fill-in-the-blank or multiple-choice questions, this exploratory model is more relatable and genuine, sparking interactions that emerge from intellectual curiosity rather than rote memorization. Likewise, it’s important to design tools that provide not flattery but the honest, constructive feedback military professionals need for real development.
Conclusion
Future wars may be determined by the military that best integrates AI across its formations, and this integration starts with professional military education. The goal is not simply to flood the military’s classrooms with AI, nor should institutions promote AI skepticism so sternly that it turns people away. Graduates must comprehend the capabilities and limitations of AI and know how to apply these tools wisely. Just as importantly, students must continue developing their cognitive skills, maturing their judgment, building multidisciplinary competence, and strengthening their ethical foundations during their educational journeys.
Institutional overhaul lays a foundation for comprehensive reform that architects of professional military education should consider as the basis for their detailed plans. New curricula based on human-AI teaming should be the top priority because they will shape the accompanying behavioral incentives and assessment requirements. Because the unique needs of each institution will vary, we expect our proposals may succeed in some cases, fall short in others, or even yield unforeseen results. This is true of any plan, so it’s important to measure progress, assess risk, and adapt accordingly.
We did not write this piece to blow the whistle on AI cheating, but to demonstrate how the time-honored problem-solving methodologies taught in military classrooms can yield comprehensive solutions. Design methodology reveals that AI integration is not a binary choice between prohibition and unrestricted access, nor is it solely about technology, cognitive development, or even ethics. It is a systems challenge requiring institutional overhaul for a new era.
Tim Devine is a U.S. Army officer in the strategist career field. He is currently serving as an instructor for Army professional military education and is a member of the Military Writers Guild.
Todd Graham is a U.S. Army infantry officer currently serving as an operations officer in the 82nd Airborne Division. He is a recent graduate of Army professional military education.
The views in this article are the authors’ and do not represent the policies or positions of the U.S. Army, the Department of Defense, or any part of the U.S. government.
**Please note, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.
Image: Petty Officer 1st Class Brian Glunt via DVIDS