In the beginning was the Kriegsspiel
Kriegsspiel (literally “war game”) was a system used for training officers in the Prussian and German armies. The first set of rules, created in 1812, was named Instructions for the Representation of Tactical Maneuvers under the Guise of a Wargame. It was originally produced by Georg Leopold von Reiswitz of the Prussian Army and developed further by his son, Lieutenant Georg Heinrich Rudolf von Reiswitz.
von Reiswitz’s Design
von Reiswitz’s system for simulating war was initially based around a specially designed table which he created for King Friedrich Wilhelm III. The table (see photos on pages 65, 67, 69 and 70) divided the game field into a grid system, a core element of many later wargame and roleplaying systems, and included different pre-cast terrain types used in modular combinations, as well as special gaming pieces and dice. von Reiswitz’s system also included methods for simulating the fog of war and communication difficulties, and the position of what he called a ‘confidant’, an impartial third party who calculated and assessed the moves, analogous to the modern gamemaster.
Military simulations, also known informally as war games, are simulations in which theories of warfare can be tested and refined without the need for actual hostilities. Many professional analysts object to the term wargames, as it is generally taken to refer to the civilian hobby; hence the preference for the term simulation.
Simulations exist in many different forms, with varying degrees of realism. In recent times, the scope of simulations has widened to include not only military but also political and social factors, which are seen as inextricably entwined in a realistic warfare model.
Whilst many governments make use of simulation, both individually and collaboratively, little is known about it outside professional circles. Yet modelling is often the means by which governments test and refine their military and political policies. Military simulations are seen as a useful way to develop tactical, strategic and doctrinal solutions, but critics argue that the conclusions drawn from such models are inherently flawed, due to the approximate nature of the models used.
The simulation spectrum
The term military simulation can cover a wide spectrum of activities, ranging from full-scale field-exercises, to abstract computerized models that can proceed with little or no human involvement – such as the Rand Strategy Assessment Center (RSAC).
As a general scientific principle, the most reliable data comes from actual observation and the most reliable theories depend on it. This also holds true in military analysis, where analysts look towards live field-exercises and trials as providing data likely to be realistic (depending on the realism of the exercise) and verifiable (it has been gathered by actual observation). One can readily discover, for example, how long it takes to construct a pontoon bridge under given conditions with given manpower, and this data can then generate norms for expected performance under similar conditions in the future, or serve to refine the bridge-building process. Any form of training can be regarded as a “simulation” in the strictest sense of the word (inasmuch as it simulates an operational environment); however, many if not most exercises take place not to test new ideas or models, but to provide the participants with the skills to operate within existing ones.
Full-scale military exercises, or even smaller-scale ones, are not always feasible or even desirable. Availability of resources, including money, is a significant factor — it costs a lot to release troops and materiel from any standing commitments, to transport them to a suitable location, and then to cover additional expenses such as petroleum, oil and lubricants (POL) usage, equipment maintenance, supplies and consumables replenishment and other items. In addition, certain warfare models do not lend themselves to verification using this realistic method. It might, for example, prove counter-productive to accurately test an attrition scenario by killing one’s own troops.
Moving away from the field exercise, it is often more convenient to test a theory by reducing the level of personnel involvement. Map exercises can be conducted involving senior officers and planners, but without the need to physically move around any troops. These retain some human input, and thus can still reflect to some extent the human imponderables that make warfare so challenging to model, with the advantage of reduced costs and increased accessibility.
A map exercise can also be conducted with far less forward planning than a full-scale deployment, making it an attractive option for more minor simulations that would not merit anything larger, as well as for very major operations where cost, or secrecy, is an issue. (This was true in the planning of OPERATION AI.)
Increasing the level of abstraction still further, simulation moves towards an environment readily recognized by civilian wargamers. This type of simulation can be manual, implying no (or very little) computer involvement, computer-assisted, or fully computerized.
Manual simulations have probably been in use in some form since mankind first went to war. Chess can be regarded as a form of military simulation (although its precise origins are debated).
In more recent times, the forerunner of modern simulations was the Prussian game Kriegsspiel (war game), which appeared around 1811 and is sometimes credited with the Prussian victory in the Franco-Prussian War. It was distributed to each Prussian regiment and they were ordered to play it regularly, prompting a visiting German officer to declare in 1824, “It’s not a game at all! It’s training for war!”
Eventually so many rules sprang up, as each regiment improvised its own variations, that two versions came into use. One, known as “rigid Kriegsspiel”, was played by strict adherence to the lengthy rule book. The other, “free Kriegsspiel”, was governed by the decisions of human umpires. Each version had its advantages and disadvantages: rigid Kriegsspiel contained rules covering most situations, and the rules were derived from historical battles where those same situations had occurred, making the simulation verifiable and rooted in observable data, a grounding which some later American models discarded. However, its prescriptive nature acted against any impulse of the participants towards free and creative thinking. Conversely, free Kriegsspiel could encourage this type of thinking, as its rules were open to interpretation by umpires and could be adapted during operation. This very interpretation, though, tended to negate the verifiable nature of the simulation, as different umpires might well adjudge the same situation in different ways, especially where there was a lack of historical precedent. In addition, it allowed umpires to weight the outcome, consciously or otherwise.
The above arguments are still cogent in the modern, computer-heavy military simulation environment. There remains a recognised place for umpires as arbiters of a simulation, hence the persistence of manual simulations in war colleges throughout the world. Both computer-assisted and entirely computerised simulations are common as well, with each being used as required by circumstances.
The Rand Corporation is one of the best-known designers of military simulations for the US government and Air Force, and one of the pioneers of the political-military simulation.
Their SAFE (Strategy and Force Evaluation) simulation is an example of a manual simulation, with one or more teams of up to ten participants being sequestered in separate rooms and their moves being overseen by an independent director and his staff. Such simulations may be conducted over a few days (thus requiring commitment from the participants): an initial scenario (for example, a conflict breaking out in the Persian Gulf) is presented to the players with appropriate historical, political and military background information.
They then have a set amount of time to discuss and formulate a strategy, with input from the directors/umpires (often called Control) as required. Where more than one team is participating, teams may be divided on partisan lines — traditionally Blue and Red are used as designations, with Blue representing the ‘home’ nation and Red the opposition. In this case, the teams will work against each other, their moves and counter-moves being relayed to their opponents by Control, who will also adjudicate on the results of such moves. At set intervals, Control will declare a change in the scenario, usually of a period of days or weeks, and present the evolving situation to the teams based on their reading of how it might develop as a result of the moves made. For example, Blue Team might decide to respond to the Gulf conflict by moving a carrier battle group into the area whilst simultaneously using diplomatic channels to avert hostilities. Red Team, on the other hand, might decide to offer military aid to one side or another, perhaps seeing an opportunity to gain influence in the region and counter Blue’s initiatives. At this point Control could declare a week has now passed, and present an updated scenario to the players: possibly the situation has deteriorated further and Blue must now decide if they wish to pursue the military option, or alternatively tensions might have eased and the onus now lies on Red as to whether to escalate by providing more direct aid to their clients.
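The move/adjudicate/advance cycle described above can be sketched as a simple loop. The scenario variables, decision rules and move count below are illustrative inventions, not the actual SAFE procedures:

```python
def run_manual_simulation(scenario, teams, control, n_moves):
    """Sketch of a SAFE-style turn cycle: each team submits a move,
    Control adjudicates the combined result and advances the scenario.
    Runs for a fixed number of moves whether or not the crisis resolves."""
    state = scenario
    log = []
    for move in range(n_moves):
        moves = {name: team(state) for name, team in teams.items()}
        state = control(state, moves)   # Control adjudicates and updates the scenario
        log.append((move, dict(moves), state))
    return log                          # ends when time runs out, resolved or not

# Illustrative Blue/Red decision rules keyed to an abstract 'tension' level
blue = lambda s: "deploy carrier group" if s["tension"] > 5 else "negotiate"
red = lambda s: "send military aid" if s["tension"] > 3 else "observe"

def control(state, moves):
    # Each assertive move raises tension; Control advances the clock a week
    delta = sum(1 for m in moves.values() if m not in ("negotiate", "observe"))
    return {"week": state["week"] + 1, "tension": state["tension"] + delta}

history = run_manual_simulation({"week": 0, "tension": 4},
                                {"Blue": blue, "Red": red}, control, n_moves=4)
```

Note that, as in the manual simulations described above, nothing forces the scenario to a resolution: the loop simply stops when the allotted moves are exhausted.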
Computer-assisted simulations are really just a development of the manual simulation, and again there are different variants on the theme. Sometimes the computer assistance will be nothing more than a database to help umpires keep track of information during a manual simulation. At other times one or other of the teams might be replaced by a computer-simulated opponent (known as an agent or automaton). This can reduce the umpires’ role to interpreter of the data produced by the agent, or obviate the need for an umpire altogether. Most commercial wargames designed to run on computers (such as Blitzkrieg, the Total War series, Civilization games, and even Arma 2) fall into this category.
Where agents replace both human teams, the simulation can become fully computerised and can, with minimal supervision, run by itself. The main advantage of this is the ready accessibility of the simulation — beyond the time required to program and update the computer models, no special requirements are necessary. A fully computerised simulation can run at virtually any time and in almost any location, the only equipment needed being a laptop computer. There is no need to juggle schedules to suit busy participants, acquire suitable facilities and arrange for their use, or obtain security clearances. An additional important advantage is the ability to perform many hundreds or even thousands of iterations in the time that it would take a manual simulation to run once. This means statistical information can be gleaned from such a model; outcomes can be quoted in terms of probabilities, and plans developed accordingly.
Removing the human element entirely means the results of the simulation are only as good as the model itself. Validation thus becomes extremely significant — data must be correct, and must be handled correctly by the model: the modeller’s assumptions (“rules”) must adequately reflect reality, or the results will be nonsense. Various mathematical formulae have been devised over the years to attempt to predict everything from the effect of casualties on morale to the speed of movement of an army in difficult terrain. One of the best known is the Lanchester Square Law formulated by the British engineer Frederick Lanchester in 1914. He expressed the fighting strength of a (then) modern force as proportional to the square of its numerical strength multiplied by the fighting value of its individual units. The Lanchester Law is often known as the attrition model, as it can be applied to show the balance between opposing forces as one side or the other loses numerical strength.
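The square law lends itself to a short numerical illustration. The sketch below integrates the paired attrition equations dA/dt = −βB, dB/dt = −αA; the force sizes and effectiveness coefficients are arbitrary illustrative values:

```python
def lanchester(a0, b0, alpha, beta, dt=1e-4):
    """Integrate Lanchester's square-law attrition equations:
    dA/dt = -beta*B, dB/dt = -alpha*A, where alpha and beta are the
    per-unit fighting effectiveness of sides A and B respectively."""
    a, b = a0, b0
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0), max(b, 0)

# Equal effectiveness, 2:1 numbers: the square law predicts the larger
# force wins with roughly sqrt(1000**2 - 500**2) ~ 866 survivors,
# far more than a simple linear model would suggest.
a_left, b_left = lanchester(1000, 500, alpha=1.0, beta=1.0)
```

The quantity αA² − βB² is conserved by these equations, which is why fighting strength scales with the square of numbers: doubling a force quadruples its strength relative to an unchanged opponent.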
Heuristic or stochastic?
Heuristic simulations are those run with the intention of stimulating research and problem solving; they are not necessarily expected to provide empirical solutions. Stochastic simulations are those that involve, at least to some extent, an element of chance.
Most military simulations fall somewhere in between these two definitions, although manual simulations lend themselves more to the heuristic approach and computerised ones to the stochastic.
Manual simulations, as described above, are often run to explore a ‘what if?’ scenario and take place as much to provide the participants with some insight into decision-making processes and crisis management as to provide concrete conclusions. Indeed, such simulations do not even require a conclusion; once a set number of moves has been made and the time allotted has run out, the scenario will finish regardless of whether the original situation has been resolved or not.
Computerised simulations can readily incorporate chance in the form of some sort of randomised element, and can be run many times to provide outcomes in terms of probabilities. In such situations, it sometimes happens that the unusual results are of more interest than the expected ones. For example, if a simulation modelling an invasion of nation A by nation B was put through one hundred iterations to determine the likely depth of penetration into A’s territory by B’s forces after four weeks, an average result could be calculated. Examining those results, it might be found that the average penetration was around fifty kilometres — however, there would also be outlying results on the ends of the probability curve. At one end, it could be that the FEBA is found to have hardly moved at all; at the other, penetration could be hundreds of kilometres instead of tens. The analyst would then examine these outliers to determine why this was the case. In the first instance, it might be found that the computer model’s random number generator had delivered results such that A’s divisional artillery was much more effective than normal. In the second, it might be that the model generated a spell of particularly bad weather that kept A’s air force grounded. This analysis can then be used to make recommendations: perhaps to look at ways in which artillery can be made more effective, or to invest in more all-weather fighter and ground-attack aircraft.
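The iteration-and-outlier analysis described above can be sketched in a few lines. The penetration model and its random factors below are invented for illustration and carry no doctrinal meaning:

```python
import random

def penetration_after_four_weeks(rng):
    """Toy model of B's advance into A's territory over 28 days: weather
    and A's artillery effectiveness vary randomly from run to run
    (illustrative distributions, not derived from any real data)."""
    depth = 0.0
    for day in range(28):
        weather = rng.random()            # 0 = clear skies, 1 = air power grounded
        artillery = rng.gauss(1.0, 0.3)   # A's artillery effectiveness multiplier
        advance = max(0.0, 3.0 + 4.0 * weather - 2.5 * artillery)
        depth += advance                  # km gained that day
    return depth

rng = random.Random(42)                   # fixed seed for repeatability
runs = sorted(penetration_after_four_weeks(rng) for _ in range(100))
mean_depth = sum(runs) / len(runs)
shallowest, deepest = runs[0], runs[-1]   # the outliers worth investigating
```

As in the text, the analyst's interest lies less in `mean_depth` than in `shallowest` and `deepest`: re-running those iterations and inspecting which random draws produced them is what yields the recommendations.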
Since Carl von Clausewitz’s famous declaration that war is merely a continuation of politics by other means, military planners have attempted to integrate political goals with military goals in their planning, with varying degrees of commitment. After World War II, political-military simulation in the West, initially almost exclusively concerned with the rise of the Soviet Union as a superpower, has more recently focused on the global ‘war on terror’. It became apparent that, in order to model an ideologically motivated enemy in general (and asymmetric warfare in particular), political factors had to be taken into account in any realistic grand-strategic simulation.
This differed markedly from the traditional approach to military simulations. Kriegsspiel was concerned only with the movement and engagement of military forces, and subsequent simulations were similarly focused in their approach. Following the Prussian success against Austria at Sadowa in 1866, the Austrians, French, British, Italians, Japanese and Russians all began to make use of wargaming as a training tool. The United States was relatively late to adopt the trend, but by 1889 wargaming was firmly embedded in the culture of the U.S. Navy (with the Royal Navy as the projected adversary).
Political-military simulations take a different approach to their purely military counterparts. Since they are largely concerned with policy issues rather than battlefield performance, they tend to be less prescriptive in their operation. However, various mathematical techniques have arisen in an attempt to bring rigor to the modeling process. One of these techniques is known as game theory — a commonly-used method is that of non-zero-sum analysis, in which score tables are drawn up to enable selection of a decision such that a favorable outcome is produced regardless of the opponent’s decision.
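The score-table method can be sketched as a maximin selection over a payoff matrix: choose the move whose worst-case payoff is highest, so the outcome is acceptable whatever the opponent does. The moves and payoff values below are invented for illustration:

```python
# Blue's payoffs (higher = better for Blue) for each (Blue move, Red move) pair.
# The values are illustrative, not drawn from any real study.
payoffs = {
    "negotiate":         {"escalate": -2, "negotiate": 3, "withdraw": 1},
    "show of force":     {"escalate": -1, "negotiate": 2, "withdraw": 4},
    "preemptive strike": {"escalate": -5, "negotiate": -3, "withdraw": 6},
}

def maximin_choice(payoffs):
    """Pick the move whose worst-case payoff is highest, i.e. a decision
    that yields a tolerable outcome regardless of the opponent's move."""
    return max(payoffs, key=lambda move: min(payoffs[move].values()))

best = maximin_choice(payoffs)   # "show of force": worst case -1, beats -2 and -5
```

Note that the maximin choice is deliberately conservative: the pre-emptive strike has the single best payoff in the table (6), but also the worst downside, so the method rejects it.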
It was not until 1954 that the first modern political-military simulation appeared (although the Germans had modeled a Polish invasion of Germany in 1929 that could fairly be labeled political-military), and it was the United States that would elevate simulation to a tool of statecraft. The impetus was US concern about the burgeoning nuclear arms race: the Soviet Union exploded its first nuclear weapon in 1949, and by 1955 had developed its first true ‘H’ bomb. A permanent gaming facility was created in the Pentagon, and various professional analysts were brought in to run it, including the social scientist Herbert Goldhamer, the economist Andrew Marshall and the MIT professor Lincoln P. Bloomfield.
Notable US political-military simulations run since World War II include the aforementioned SAFE, STRAW (Strategic Air War) and COW (Cold War). The typical political-military simulation is a manual or computer-assisted heuristic-type model, and many research organizations and think-tanks throughout the world are involved in providing this service to governments. During the Cold War, the Rand Corporation and the Massachusetts Institute of Technology, amongst others, ran simulations for the Pentagon that included modeling the Vietnam War, the fall of the Shah of Iran, the rise of pro-communist regimes in South America, tensions between India, Pakistan and China, and various potential flashpoints in Africa and South-East Asia. Both MIT and Rand remain heavily involved in US military simulation, along with institutions such as Harvard, Stanford, and the National Defense University. Other nations have their equivalent organizations, such as Cranfield Institute’s Defence Academy (formerly the Royal Military College of Science) in the United Kingdom.
Participants in the Pentagon simulations were sometimes of very high rank, including members of Congress and White House insiders as well as senior military officers. The identity of many of the participants remains secret even today. It is a tradition in US simulations (and those run by many other nations) that participants are guaranteed anonymity. The main reason for this is that occasionally they may take on a role or express an opinion that is at odds with their professional or public stance (for example portraying a fundamentalist terrorist or advocating hawkish military action), and thus could harm their reputation or career if their in-game persona became widely known.
It is also traditional that in-game roles are played by participants of an equivalent rank in real life, although this is not a hard-and-fast rule and is often disregarded. Whilst the major purpose of a political-military simulation is to provide insights that can be applied to real-world situations, it is very difficult to point to a particular decision as arising from a certain simulation — especially as the simulations themselves are usually classified for years, and even when released into the public domain are sometimes heavily censored. This is not only due to the unwritten policy of non-attribution, but also to avoid disclosing sensitive information to a potential adversary. This has been true within the simulation environment itself as well — former US president Ronald Reagan was a keen visitor to simulations conducted in the 1980s, but as an observer only. An official explained: “No president should ever disclose his hand, not even in a war game.”
Political-military simulations remain in widespread use today: modern simulations are concerned not with a potential war between superpowers, but more with international cooperation, the rise of global terrorism and smaller brushfire conflicts such as those in Kosovo, Bosnia, Sierra Leone and the Sudan. An example is the MNE (Multinational Experiment) series of simulations that have been run from the Ataturk Wargaming, Simulation and Culture Center in Istanbul over recent years. The latest, MNE 4, took place in early 2006. MNE includes participants from Australia, Canada, Finland, France, Germany, Sweden, the United Kingdom, the North Atlantic Treaty Organization (NATO) and the United States, and is designed to explore the use of diplomatic, economic and military power in the global arena.
Simulation and reality
Ideally military simulations should be as realistic as possible — that is, designed in such a way as to provide measurable, repeatable results that can be confirmed by observation of real-world events. This is especially true for simulations that are stochastic in nature, as they are used in a manner that is intended to produce useful, predictive outcomes. Any user of simulations must always bear in mind that they are, however, only an approximation of reality, and hence only as accurate as the model itself.
In the context of simulation, validation is the process of testing a model by supplying it with historical data and comparing its output to the known historical result. If a model can reliably reproduce known results, it is considered to be validated and assumed to be capable of providing predictive outputs (within a reasonable degree of uncertainty).
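The validation process described above can be sketched as a simple backtest over historical cases. The toy model, the cases and the acceptance threshold here are all illustrative assumptions, not any organisation's actual procedure:

```python
def validate(model, historical_cases, threshold=0.9):
    """Feed each historical case's inputs to the model and count how often
    the predicted outcome matches what actually happened. Returns the
    correspondence rate and whether it clears the acceptance threshold."""
    hits = sum(1 for inputs, actual in historical_cases if model(inputs) == actual)
    correspondence = hits / len(historical_cases)
    return correspondence, correspondence >= threshold

# Toy attrition model: the side with the larger force wins
model = lambda forces: "attacker" if forces["attacker"] > forces["defender"] else "defender"

# Invented historical record: numbers alone do not decide the third case
cases = [
    ({"attacker": 1200, "defender": 800}, "attacker"),
    ({"attacker": 600, "defender": 900}, "defender"),
    ({"attacker": 1000, "defender": 950}, "defender"),  # surprise, terrain, morale...
]
score, validated = validate(model, cases)   # 2 of 3 correct: not validated
```

The third case is the interesting one: a model blind to human factors misses it, which is precisely the kind of shortfall that validation against history is meant to expose.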
Developing realistic models has proven to be somewhat easier in naval simulations than on land. One of the pioneers of naval simulations, Fletcher Pratt, designed his “Naval War Game” in the late 1930s, and was able to validate his model almost immediately by applying it to the encounter between the German pocket battleship Admiral Graf Spee and three British cruisers in the Battle of the River Plate off Montevideo in 1939. Rated on thickness of armour and gun power, Graf Spee should have been more than a match for the lighter cruisers, but Pratt’s formula correctly predicted the ensuing British victory.
In contrast, many modern operations research models have proven unable to reproduce historical results when validated; the Atlas model, for instance, was shown in 1971 to be incapable of achieving more than a 68% correspondence with historical results. Trevor Dupuy, a prominent American historian and military analyst known for airing often controversial views, has said that “many OR analysts and planners are convinced that neither history nor data from past wars has any relevance.” In Numbers, Predictions, and War, he implies that a model which cannot even reproduce a known outcome is little more than a whimsy, with no basis in reality.
Historically, there have even been a few rare occasions where a simulation was validated as it was being carried out. One notable such occurrence was just before the famous Ardennes offensive in World War II, when the Germans attacked allied forces during a period of bad weather in the winter of 1944, hoping to reach the port of Antwerp and force the Allies to sue for peace. According to German General Friedrich J Fangor, the staff of Fifth Panzerarmee had met in November to game defensive strategies against a simulated American attack. They had no sooner begun the exercise than reports began arriving of a strong American attack in the Hürtgen area — exactly the area they were gaming on their map table. Generalfeldmarschall Walther Model ordered the participants (apart from those commanders whose units were actually under attack) to continue playing, using the messages they were receiving from the front as game moves. For the next few hours simulation and reality ran hand-in-hand: when the officers at the game table decided that the situation warranted commitment of reserves, the commander of the 116th Panzer Division was able to turn from the table and issue as operational orders those moves they had just been gaming. The division was mobilised in the shortest possible time, and the American attack was repulsed.
Validation is a particular issue with political-military simulations, since much of the data produced is subjective. One controversial doctrine that arose from early post-WWII simulations was that of signalling — the idea that by making certain moves, it is possible to send a message to your opponent about your intentions: for example, by conspicuously conducting field exercises near a disputed border, a nation indicates its readiness to respond to any hostile incursions. This was fine in theory, and formed the basis of East-West interaction for much of the cold war, but was also problematic and dogged by criticism. An instance of the doctrine’s shortcomings can be seen in the bombing offensives conducted by the United States during the Vietnam War.
US commanders decided, largely as a result of their Sigma simulations, to carry out a limited bombing campaign against selected industrial targets in North Vietnam. The intention was to signal to the North Vietnamese high command that, whilst the United States was clearly capable of destroying a much greater proportion of their infrastructure, this was in the nature of a warning to scale down involvement in the South ‘or else’. Unfortunately, as an anonymous analyst said of the offensive (which failed in its political aims), “they either didn’t understand, or did understand but didn’t care.” It was pointed out by critics that, since both Red and Blue teams in Sigma were played by Americans — with common language, training, thought processes and background — it was relatively easy for signals sent by one team to be understood by the other. Those signals, however, did not seem to translate well across the cultural divide.
Problems of simulation
Many of the criticisms directed towards military simulations derive from an incorrect application of them as a predictive and analytical tool. The outcome supplied by a model relies to a greater or lesser extent on human interpretation and therefore should not be regarded as providing a ‘gospel’ truth. However, whilst this is generally understood by most game theorists and analysts, it can be tempting for a layman — for example, a politician who needs to present a ‘black and white’ situation to his electorate — to settle on an interpretation that supports his preconceived position.
Tom Clancy, in his novel Red Storm Rising, illustrated this problem when one of his characters, attempting to persuade the Soviet Politburo that the political risks of war were acceptable, presented as evidence the results of a simulation carried out to model just such an event, arguing that NATO would not be in a position to react in the face of political uncertainty caused by a division of opinion between the Allies. It is revealed in the text that there were in fact three sets of results from the simulation: a best-, intermediate- and worst-case outcome. The advocate of war chose to present only the best-case outcome, thus distorting the results to support his case.
Although fictional, the above scenario may well have been based on fact. The Japanese extensively wargamed their planned expansion during World War II, but map exercises conducted before the Pacific War were frequently stopped short of a conclusion in which Japan was defeated. One often-cited example prior to Midway had the umpires magically resurrecting a Japanese carrier sunk during a map exercise, although Professor Robert Rubel argues in the Naval War College Review that their decision was justified in this case, given improbable rolls of the dice. Given the historical outcome, it is evident the dice rolls were not so improbable after all. There were, however, equally illustrative fundamental problems with other areas of the simulation, mainly relating to a Japanese unwillingness to consider their position should the element of surprise, on which the operation depended, be lost.
Tweaking simulations to make results conform with current political or military thinking is a recurring problem. In US Naval exercises in the 1980s, it was informally understood that no high-value units such as aircraft carriers were allowed to be sunk, as naval policy at the time concentrated its tactical interest on such units. The outcome of one of the largest ever NATO exercises, Ocean Venture-81, in which around 300 naval vessels, including two carrier battle groups, were adjudged to have successfully traversed the Atlantic and reached the Norwegian Sea despite the existence of a (real) 380-strong Soviet submarine fleet as well as their (simulated) Red Team opposition, was publicly questioned in Proceedings, the professional journal of the US Naval Institute. The US Navy managed to get the article classified, and it remains secret to this day, but the article’s author and chief analyst of Ocean Venture-81, Lieutenant Commander Dean L. Knuth, has since claimed two Blue aircraft carriers were successfully attacked and sunk by Red forces.
There have been many charges over the years of computerized models, too, being unrealistic and slanted towards a particular outcome. Critics point to the case of military contractors seeking to sell a weapons system. For obvious reasons of cost, weapons systems (such as an air-to-air missile system for use by fighter aircraft) are extensively modeled on computer. Without testing of their own, a potential buyer must rely to a large extent on the manufacturer’s own model. This might well indicate a very effective system, with a high kill probability (Pk). However, it may be that the model was configured to show the weapons system under ideal conditions, and its actual operational effectiveness will be somewhat less than stated. The US Air Force quoted its AIM-9 Sidewinder missile as having a Pk of 0.98 (that is, it will successfully destroy 98% of the targets it is fired at). In operational use during the Falklands War in 1982, the British recorded its actual Pk as 0.78.
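Whether a quoted Pk is even consistent with field results can be checked against live firing records. The sketch below estimates an observed Pk with a simple normal-approximation confidence interval; the engagement counts are hypothetical, invented for illustration, and are not the actual Falklands figures:

```python
import math

def observed_pk(kills, shots):
    """Point estimate of kill probability, with a normal-approximation
    95% confidence interval (only reasonable for moderate sample sizes)."""
    p = kills / shots
    half_width = 1.96 * math.sqrt(p * (1 - p) / shots)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical firing record: 18 kills from 23 launches
p, (lo, hi) = observed_pk(18, 23)

quoted_pk = 0.98
consistent = lo <= quoted_pk <= hi   # does the quoted figure survive the field data?
```

Even with so small a sample, a manufacturer's near-perfect quoted Pk can fall outside the interval the field data supports, which is the statistical form of the complaint the text describes.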
Another factor that can render a model invalid is human error. One notorious example was the US Air Force’s Advanced Penetration Model, which, due to a programming error, made US bombers invulnerable to enemy air defences by inadvertently altering their latitude or longitude when checking their location for a missile impact. This had the effect of ‘teleporting’ the bomber, at the instant of impact, hundreds or even thousands of miles away, causing the missile to miss. Furthermore, this error went unnoticed for a number of years. Other unrealistic models have had battleships consistently steaming at seventy knots (twice their top speed), an entire tank army halted by a border police detachment, and attrition levels 50% higher than the numbers each force began with.
Issues of enemy technical capability and military philosophy will also affect any model used. Whilst a modeller with sufficiently high security clearance and access to the relevant data can expect to create a reasonably accurate picture of his own nation’s military capacity, creating a similarly detailed picture for a potential adversary may be extremely difficult. Military information, from technical specifications of weapons systems to tactical doctrine, is high on the list of any nation’s most closely guarded secrets. However, the difficulty of discovering the unknown, when it is at least known that it exists, seems trivial compared to discovering the unguessed.
As Len Deighton famously pointed out in Spy Story, if the enemy has an unanticipated capability (and he almost always does), it may render tactical and strategic assumptions so much nonsense. By its very nature, it is not possible to predict the direction every new advance in technology will take, and previously undreamt-of weapons systems can come as a nasty shock to the unprepared: the British introduction of the tank during World War I caused panic amongst German soldiers at Cambrai and elsewhere, and the advent of Hitler’s vengeance weapons, such as the V-1 “flying bomb”, caused deep concern amongst Allied high command.
Human factors have been a constant thorn in the side of the designers of military simulations — whereas political-military simulations are often required by their nature to grapple with what are referred to by modellers as “squishy” problems, purely military models often seem to prefer to concentrate on hard numbers. Whilst a warship can be regarded, from the perspective of a model, as a single entity with known parameters (speed, armour, gun power, and the like), land warfare often depends on the actions of small groups or individual soldiers where training, morale, intelligence, and personalities (leadership) come into play.
For this reason land warfare is more taxing to model — there are many variables that are difficult to formulate. Commercial wargames, both the tabletop and computer variety, often attempt to take these factors into account: in Rome: Total War, for example, units will generally rout from the field rather than stay to fight to the last man. One valid criticism of some military simulations is that these nebulous human factors are often ignored (partly because they are so hard to model accurately, and partly because no commander likes to acknowledge that men under his command may disobey him). In recognition of this shortcoming, military analysts have in the past turned to civilian wargames as being more rigorous, or at least more realistic, in their approach to warfare. In the United States, James F. Dunnigan, a prominent student of warfare and founder of the commercial tabletop wargames publisher Simulations Publications Incorporated (SPI, now defunct), was brought into the Pentagon’s wargaming circle in 1980 to work with Rand and Science Applications Incorporated (SAI) on the development of a more realistic model. The result, known as SAS (Strategic Analysis Simulation), is still being used.
The human factors problem was an essential element in the development of Jeremiah at Lawrence Livermore National Laboratory in the 1980s. Research by Lulejian and Associates had indicated that the individual soldier’s assessment of his probability of survival was the key metric in understanding why and when combat units became ineffective. While their research was based on day-to-day time scales, the developer of Jeremiah, K E Froeschner, applied the principle to the 10-second time step of the computer simulation. The result was a high degree of correlation with measured actions for which detailed data were available from a very few after-action reports from WWII and the Israeli tank action on the Golan Heights, as well as from live exercises conducted at Hunter Liggett Military Reservation in Monterey, California.
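The Jeremiah code itself has not been published, but the reported principle (each simulated soldier re-estimates his probability of survival at every 10-second time step, and stops fighting when that estimate falls too low) can be sketched as follows. The update rule, break threshold, and penalty weights are all invented for illustration:

```python
TIME_STEP_S = 10    # Jeremiah's reported simulation step
BREAK_POINT = 0.3   # assumed: perceived survival odds below which a soldier stops fighting

class Soldier:
    def __init__(self, morale=1.0):
        self.p_survive = 1.0    # the soldier's own estimate, not ground truth
        self.morale = morale
        self.effective = True

    def step(self, casualties_nearby, fire_intensity):
        """Re-estimate survival odds from local cues once per time step."""
        # Hypothetical update: perceived odds fall with nearby casualties
        # and incoming fire, moderated by individual morale.
        penalty = (0.05 * casualties_nearby + 0.02 * fire_intensity) / self.morale
        self.p_survive = max(0.0, self.p_survive - penalty)
        if self.p_survive < BREAK_POINT:
            self.effective = False   # seeks cover, stops returning fire; not dead

def effective_fraction(squad):
    """Share of the squad still combat-effective."""
    return sum(s.effective for s in squad) / len(squad)

squad = [Soldier(morale=1.0) for _ in range(8)]
for _ in range(5):                   # 50 simulated seconds under heavy fire
    for s in squad:
        s.step(casualties_nearby=1, fire_intensity=5)
print(effective_fraction(squad))     # 0.0: the unit is ineffective, though nobody 'died'
```

The point of such an approach is that a unit can become combat-ineffective well before it is physically destroyed, which is exactly what hard-numbers attrition models tend to miss.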
Jeremiah was subsequently developed into Janus by other researchers, and the ‘Jeremiah Algorithm’ was deleted for reasons of economy (Janus ran initially on a small computer) and for the reasons cited above — some in the military (mostly lower ranks) did not like the idea of orders not being obeyed. However, the generals who witnessed Jeremiah and the algorithm in action were usually favourable and recognized the validity of the approach.
All the above means that models of warfare should be taken for no more than they are: a non-prescriptive attempt to inform the decision-making process. The dangers of treating military simulation as gospel are illustrated in an anecdote circulated at the end of the Vietnam War, which was intensively gamed between 1964 and 1969 (with even President Lyndon Johnson being photographed standing over a wargaming sand table at the time of Khe Sanh) in a series of simulations codenamed Sigma. The period was one of great belief in the value of military simulations, riding on the back of the proven success of operations research (or OR) during World War II and the growing power of computers in handling large amounts of data.
The story concerned a fictional aide in Richard Nixon’s administration, who, when Nixon took over government in 1969, fed all the data held by the US pertaining to both nations into a computer model — population, gross national product, relative military strength, manufacturing capacity, numbers of tanks, aircraft and the like. The aide then asked the model, “When will we win?” Apparently the computer replied, “You won in 1964!”
- Military exercise
- Modeling and simulation
- Military Operations Research Society (MORS)
- Operations Research
- The 20th Century’s First Strategic War Game
- Origins of the Kriegsspiel
- Von Reisswitz’s Original Equipment
- Kriegsspiel Bibliography
- British Army Kriegsspiel pieces, 1885
- Virtual military
- Glossary of military modeling and simulation
- Space And Naval Warfare Systems Command (SPAWAR)
- SSC Pacific – SPAWAR Careers – U.S. Navy
- The Navy’s Information Dominance Systems Command
- SPAWAR | Halldale
- The RAND Military Operations Simulation Facility: An …
- War Gaming | RAND – RAND Corporation
- Military simulation
- Army, Navy, Air Force Journal & Register
- wargame – Reddit
- Chiltern Kriegsspiel Circle – Homestead