On March 13, 1964, Catherine "Kitty" Genovese is murdered in Queens, New York. Her assailant attacks her 30 meters from her apartment door and stabs her in the back twice. Kitty screams for help and cries that she has been stabbed, but to no avail. One neighbour shouts to “let that girl alone”. The attacker flees the scene, only to return ten minutes later. He searches for his wounded victim until he finds her, barely conscious, in front of the locked door of her apartment building. He stabs her several more times, rapes the fatally wounded woman and steals $49 from her purse. The attack lasts half an hour. Police investigations reveal that approximately a dozen neighbours heard or observed parts of the attacks, but nobody took action.

Two weeks after the attack, the New York Times publishes an article with the headline “37 Who Saw Murder Didn’t Call the Police”. The public responds with an outcry, shocked by the apathy and callousness of the witnesses. Though exaggerated, the story rises to fame and ignites a fundamental discussion of the pervasive failure to help those in critical need.

Bibb Latané and John Darley term this issue the bystander effect. While we would assume that an increase in witnesses also increases the likelihood of help, in fact the opposite is the case. Individuals within a crowd experience a diffusion of responsibility, expecting “the others” to take action. As each individual focuses on the group behaviour as a reference, particularly in ambiguous situations, none act on their own. This issue is further facilitated by increased anonymity in a crowd, which drastically reduces perceived moral obligations to intervene. The awful irony is thus that the more people are present in a critical situation, the less likely we are to receive help.

One solution to this problem is to turn the focus back on the individual. Addressing bystanders as individuals (“Hey, you with the black shirt!”) and requesting concrete help (“Call the police!”) brings back a sense of individual responsibility and diminishes anonymity.

One of the most essential aspects of human civilisation is the formation of groups. Yes, you are absolutely right, you totally could beat up that mammoth all by yourself. But wouldn’t it be easier if you were part of a team?
Groups are without doubt the foundation of our cultural and technological rise. The inevitable consequence of group formation, however, is a distinction into in- and outgroups - groups you belong to and groups you are not a part of. As group affiliation provided essential information on whom to trust and who was most likely to reciprocate assistance, group distinction provided a valuable evolutionary advantage in survival. Nowadays, groups form for a wide variety of reasons, such as sports clubs, political parties, those who love Star Wars and those who are wrong, or peer groups.

Groups give us a sense of identity; they provide safety, support and appreciation. But they do not come without drawbacks. Ingroup favouritism describes a preferred treatment and evaluation of ingroups compared to outgroups and does not even require a reason: The mere categorisation into a group evokes a preference towards it. Contact with an ingroup increases perceived similarity to oneself, and dissimilarity to other groups. Actions of outgroups are attributed to personal factors (“it’s because they’re a bad person”), whereas actions of ingroups are attributed to circumstances (“they had no choice, it was raining”). Perceiving resources to be finite increases intergroup competition, and outgroups can be considered a threat to personal goals and gains. This can in part explain why some people have reservations regarding refugees, Muslims, homosexuals or others who are perceived as different to oneself and one’s ingroup. But it is not an excuse. It simply highlights how deeply rooted this framework is in our minds, and how important it is to find ways to free ourselves from these innate distinctions.

You are assigned to a team of ten to develop a new system for… Frankly, you don’t even know what it’s supposed to do. All you know is that it will take weeks of tedious effort and most likely will never be implemented anyway. Nobody in your group has a particular task; you are just supposed to throw something together and hand it in as a group effort. Conscientious person that you are, you go and make yourself a coffee, open up some funny cat videos on your phone and lean back. Someone will surely come up with something presentable.

Social loafing is the tendency to work less when in a group than when working alone. It is the result of several factors that are often ignored when designing tasks. As usual, an increase in group size also increases the diffusion of responsibility. It also means that the individual effort most likely will neither be essential nor noticed. If individual contributions are not considered in the grand scheme of things, why would the individual care to work hard? The result is a large group of individuals who each rely on “the others” to fix up the task, and a tremendous decrease in productivity.
The key to maximising group efforts is thus to 1. keep groups small, 2. make tasks meaningful, 3. make individual contributions identifiable and accountable, and 4. convey that individual efforts matter.

The Köhler effect describes the opposite of social loafing: An individual works harder in a group than alone. It is based on 1. an upwards comparison to group members that perform better than the individual, and 2. the dependency of the group on each member. A classic example describes a mountain-climbing team connected by a rope. The team can only climb as fast as their least proficient member. The weakest member, aware of this, will try harder than they might when climbing alone. The effect is increased if the group can monitor individual efforts and works in close proximity. It also increases when the superior comparison is a member of an outgroup. Interestingly, a less-capable man will work harder when teamed with a superior woman than with a superior man.

In 1971, Philip Zimbardo conducts an experiment on depersonalisation and deindividuation at Stanford University. 24 psychologically stable and healthy middle-class men are assigned to take on the roles of prisoner or guard over the course of two weeks in a fake prison in the psychology building’s basement.
On the second day, the prisoners revolt. The guards subdue them with fire extinguishers and punish them by taking away their beds, clothes and access to toilets. Prisoners have to defecate and urinate in buckets, which soon makes the entire basement smell of faeces. After three days, one prisoner exhibits such extreme stress reactions that he has to be released. One third of the guards exhibit genuine sadistic tendencies, especially at night, when they assume the security cameras to be offline. Four prisoners suffer a nervous breakdown, one exhibits severe psychosomatic stress symptoms, and the others become fully submissive to the guards to avoid punishment. The now famous Stanford Prison Experiment is aborted after only six days.

Deindividuation describes the loss of self-awareness and individual accountability in a group. It is the result of individuals within a group creating a collective mind that replaces the individual, shifts attention towards the group, and decreases the salience of personal identity. Through this reduced self-awareness, individuals no longer observe and evaluate themselves, nor act in line with their personal beliefs and values, but base their actions on the group imperative. This is facilitated, again, by anonymity, which causes a diffusion of responsibility, less concern with evaluation by others, and less internal inhibition. Behaviour can become more impulsive, emotional, irrational and even anti-social. For this very reason, armies wear uniforms. They facilitate deindividuation by stripping away personality and attached norms and values, while simultaneously increasing group identification and anonymity within the group, making it easier to follow orders and less likely to question actions.

On the 15th of April 1961, eight CIA bombers set out to destroy the entire Cuban Air Force. On the 17th of April, 1,500 CIA-funded and -trained counter-revolutionary Cuban-exile paramilitaries invade the Bay of Pigs to spark a revolution and overthrow the increasingly communist government of Fidel Castro. They are greeted by a militia of over 200,000 men. On the 20th of April, all paramilitaries are either dead or captured. Following the failed invasion, Cuba strengthens its ties to the Soviet Union, culminating in the Cuban Missile Crisis that almost annihilated human civilisation as we know it. If Hannibal Smith loved it when a plan came together, this would be his worst nightmare.

The Bay of Pigs fiasco is the result of an interplay of both groupthink and the Abilene paradox.
The term groupthink describes a psychological phenomenon in which a group strives for conformity and consensus at all costs, resulting in irrational and oftentimes terrible decision making. Flaws, alternatives, criticism and conflict are suppressed, and outside influence is blocked.

Consequently, the group believes itself to be in total agreement, morally and rationally right in its actions, and invulnerable in its doing. The group even establishes so-called mindguards that protect the group and its leader from opposing information. As no counter-arguments arise, the group heads straight into its doom.
The Abilene paradox takes this issue a step further. It describes a situation in which each member opposes what they believe to be the group consensus, but nobody speaks up, and thus the false consensus remains in place.
Much information indicated that the CIA operation was doomed to fail, and several officials considered the approach dangerous. Yet no critical evidence was considered; instead, the group deemed the plan infallible and proceeded, infatuated by a false sense of invulnerability. Following the incident, President Kennedy changed the entire decision-making process of his staff. Informal settings, devil’s advocates and an emphasis on counter-arguments were established, and criticism was explicitly demanded.

You participate in a study on memory and learning at Yale University. On your arrival, you are randomly assigned to the role of teacher; another participant becomes the learner. Your task is to teach a list of word pairs to the learner and subsequently test whether they can remember the pairs correctly. If the answer is right, you proceed to the next pair; if the answer is wrong, you are to administer an electric shock to the learner. The voltage increases by 15 volts with each wrong answer. The learner is placed in a different room, but after a number of voltage increases, you start to hear sounds. He bangs against the wall, complains about his heart condition, and ultimately screams in pain. You ask to stop the experiment, but the experimenter tells you to continue. He assures you that you will not be held responsible. He says that the experiment requires that you continue. That it is essential that you continue. That you have no choice but to continue. The voltage rises with every mistake. On the display, 300 volts is labeled “Danger!”. You continue. The maximum, 450 volts, is labeled “XXX”. You continue.

Prior to the experiment, a survey asks Yale psychology majors and teachers to predict the outcome of the experiment. The respondents believe that only a small fraction of teachers would inflict the maximum voltage.
The results show otherwise: two thirds of the participants administered a potentially fatal electric shock when urged to do so.

The Milgram experiment investigated the willingness of participants to perform actions that are irreconcilable with their conscience but are encouraged by authority figures. The experiment was designed to find an explanation for the crimes against humanity in Nazi Germany. The learner was an actor who was not harmed in any way.
To the researcher’s surprise, most participants were willing to obey orders, even when they apparently caused serious pain and peril to the learner.
The controversial findings suggest that obedience to authority is deeply embedded in all of us and can cause ordinary people to commit actions they would firmly reject under different circumstances.

What creates intergroup conflict? How does prejudice develop? Muzafer Sherif proposed the essence of group conflict to be a competition for limited resources. To test his theory, he went to summer camp.

In the now famous Robbers Cave study, 22 twelve-year-old boys were randomly assigned to one of two groups and spent their time separately at two campsites. Their group identity was strengthened: they developed their own group norms and activities and gave their groups names - the Rattlers and the Eagles.
Next, the existence of the other group was revealed to each group. They competed with each other in several activities and received prizes if they succeeded. The losers got nothing.
The conflict between the Rattlers and Eagles escalated more and more. Verbal threats were followed by the burning of the Rattlers’ flag, which was answered by ravaging the Eagles’ cabin. The well-adjusted middle-class boys had established firm in- and outgroups and faced each other with hostile rivalry.

It is easy to dismiss the boys’ quarrel as child’s play. Yet much of our world is built on the premise that one man’s loss is another man’s gain. Realistic group conflict theory proposes that resources such as money, political power or social status can be seen as limited, and groups will compete over their acquisition. Depending on the value and scarcity, the conflict may escalate tremendously.

A solution is provided by Gordon Allport. The contact hypothesis describes how to effectively improve intergroup relations following conflict and prejudice. It states that interaction with the other group and effective communication enable both groups to understand and appreciate each other’s point of view and thus resolve conflict. Key elements for this interaction to be successful are:
1. Equal status of both groups
2. Common superordinate goals
3. Cooperation between both groups
4. Approval of authorities
5. Personal interaction