Wednesday, August 21, 2013

Seven Secrets To Creating A Cohesive Team

We have all been on a team with people we dislike; those who do not pull their weight, those who never show up to team meetings and those who simply refuse to delegate. 

What, then, is the equation for the perfect team; the recipe that most homogeneously combines talent, direction, innovation and leadership such that what comes out of the oven is cohesive, inspirational and fully baked?

The best single, all-around resource for this kind of advice is, of course, Richard Hackman's Collaborative Intelligence (one of the top 5 books every intel professional should read but has probably never heard of). Beyond Hackman is the kind of research developed by the University of Melbourne's Dr. Fiona Fidler and recently presented by George Mason's Dr. Charles Twardy at the Global Intelligence Forum on how to make decisions in groups.

Beyond these authoritative sources, there is lots of other advice and research within the flourishing domain of the blogosphere - some good, some bad.  Evenly balanced between team composition (how the team is assembled) and team action (how the team interacts), the following seven broad guidelines represent a quick overview of the current expert opinion on achieving the ideal team chemistry:

Team composition:

1. Social skills. 

Both SciBlog's 10 Keys to Building Great Teams and its follow-on 3 Scientific Secrets to Great Team Chemistry agree that balancing the social skills of a group is a key element of team success. The 2010 ScienceMag article features an interesting study, Evidence for a Collective Intelligence Factor in the Performance of Human Groups, in which researchers found that it is not the average intelligence of group members that most closely correlates with group performance, but their social skills. Within a group dynamic, social skills influence communication, tasking, constructive criticism, feedback and flexibility, so it is no wonder they are the better predictor of group success.
The study tested 699 participants over the course of two experiments, reaching two main conclusions: a) group collective intelligence is a real phenomenon (which the researchers call "c") and b) the average intelligence of group members is not significantly correlated with c (r = 0.19, ns). When working individually, intelligence is a predictor of success (r = 0.33, p<0.01); within a group dynamic, however, c turned out to be a better predictor of group success than individual intelligence. More telling still, the statistically significant correlates of c were "social sensitivity" (r = 0.26, p<0.01), greater participation and conversational turn-taking, and the number of females in the group (r = 0.23, p<0.01). In other words, social skills are a better predictor of team success than intelligence.
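For readers who want to see what those r values actually measure, here is a minimal sketch of Pearson's r, the correlation coefficient behind the study's figures. The data below are made up purely for illustration (only the formula reflects what the study computed): one invented variable tracks group performance closely, the other only weakly.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for six teams: social sensitivity tracks group
# performance closely; average member IQ barely does.
social     = [6, 7, 5, 8, 4, 9]
group_perf = [60, 72, 55, 80, 45, 88]
avg_iq     = [110, 98, 120, 105, 96, 118]

r_social = pearson_r(social, group_perf)   # strong positive correlation
r_iq     = pearson_r(avg_iq, group_perf)   # weak correlation
```

The pattern mirrors the study's headline result: the social variable correlates strongly with performance while average intelligence does not, even though intelligence predicts *individual* performance well.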
2. Gender ratios.
This final finding from the ScienceMag article ties team-building recommendation #2 back to the first. Women consistently score higher on social-skills assessments than men, suggesting that the easiest way to improve the collective social skills of a team is to have an equal balance of male and female team members. Several studies also find that mixed-gender teams tend to outperform all-male or all-female teams (such as this 2011 study and this report from Credit Suisse).
The Harvard Business Review blog adds a caveat to social skills, arguing that "emotional intelligence" (EI) is "the biggest predictor of team success" (citing Emotional Intelligence and its Role in Collaboration). Their three-step program to ensure maximum EI within a team involves a) becoming aware of each team member's skills, b) establishing structured ways to disagree and c) taking the time to celebrate success.
3. Interdisciplinary approach. 
The first step of the three-step process for promoting a team's emotional intelligence touches on an interesting point: becoming aware of and capitalizing on team members' skills. While compiling a multi-disciplinary team creates challenges, it also provides distinct advantages. Team members who come from different backgrounds - computer science, linguistics, intelligence, anthropology, sociology, geography, etc. - come with different perspectives on issues and different approaches to problems. The multidimensionality this adds to the end product is indispensable.
4. Conceptual alignment.
An interdisciplinary team, however, adds complications in terms of getting everyone on the same page. With everyone viewing the problem from a different vantage point, there will likely be discrepancies in team members' mental representations of the problem and what I call 'the lexicon polemic', i.e., different names for the same concept between disciplines.
Note:  Bracken and Oughton do a good job of explaining the three ways in which language is important to interdisciplinary research.  
Both sources propose to resolve this by "sharing the story," or combining the mental models of all team members. A glance over to pedagogical practice provides a technique ideal for achieving this.
Nominal Group Technique (NGT) is a strategy that works as well in a classroom as it does in a professional environment. Literature on teaching strategy has long since concluded that individual brainstorming is more effective than group brainstorming.
NGT is an approach that requires team members to brainstorm solutions to a problem on an individual basis, record their solutions, then combine all solutions and group them according to similarity (the most salient solutions will, in effect, appear more frequently).
This achieves two main purposes: 1. It aligns the group conceptually by revealing the overlap of key concepts between and among disciplines and 2. It aligns the group linguistically (resolving the lexicon polemic) by revealing the overlap of key vocabulary between and among disciplines.
For example, anchoring bias sounds a lot like what linguists call "semantic priming". This technique could help a group of linguists and analysts realize that they are actually talking about the same phenomenon but using different words.
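The aggregation step of NGT can be sketched in a few lines. Everything below - the brainstormed ideas and the crude normalization rule - is hypothetical; in practice, grouping answers by similarity (and spotting lexicon-polemic synonyms) is a human judgment call, not a string operation.

```python
from collections import Counter

# Each team member brainstorms alone and records their own list of ideas.
individual_lists = [
    ["satellite imagery", "HUMINT sources", "open source media"],
    ["Satellite Imagery", "social media", "open-source media"],
    ["HUMINT sources", "satellite imagery", "financial records"],
]

def normalize(idea):
    # Crude stand-in for real similarity grouping: fold case and hyphens so
    # near-identical wordings land in the same bucket.
    return idea.lower().replace("-", " ")

# Pool all answers and count how often each (normalized) idea recurs.
pooled = Counter(normalize(i) for member in individual_lists for i in member)

# The most salient solutions appear most frequently across members -
# these are the group's points of conceptual overlap.
ranked = pooled.most_common()
```

Running this surfaces "satellite imagery" first (all three members listed it), exactly the overlap-revealing effect the two purposes above describe.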
Team action:

5. Define goals.
This seems like a fairly obvious step for a team to take (sometimes so obvious that each member assumes a mutual understanding of the goal and it is never explicitly discussed). Clearly stating what the end result of the collaboration should be is indisputably important to success. Beyond a mutual understanding of the goal among team members, Shteynberg and Galinsky show that "sharing intentionality leads to implicit coordination," meaning that those who explicitly share goals with others are more likely to instinctively act in the same way (actions that invariably trend towards achievement of the shared objective).
6. Define roles.
Equally important as defining the goal of the team, according to SciBlog's article, is specifying how it intends to get there: who is responsible for what, when and how. Clearly defining roles, expected contributions and individual deadlines keeps the team on track and reduces collaborative ambiguity. Project management tools such as DropTask (which we have recently been using) can greatly assist in tasking individual team members and managing deadlines. Some tools are better than others; some are more professionally focused, others more academic, and they vary in their emphasis on budget management, timeline management and visual presentation.
7. Communication.
Finally, though seemingly the most hackneyed piece of advice for team building, project management and leadership within both professional and academic domains, communication is key! The comprehensive article Teamwork: Emerging Principles highlights this facet of collaboration among many others. The best advice for effective communication is defining your communication space and contact points, whatever those may be. Weekly in-person meetings, an e-mail group where everyone gets CCed, a project management tool, Google Docs... it doesn't matter. Define your space and make sure everyone on the team knows how to communicate everything from criticism to congratulations.

Wednesday, August 14, 2013

NOW AVAILABLE: The Ancient Viking Game Every Intelligence Professional Should Play

Panel from the comic, Cthulhu Vs The Vikings 
A couple of weeks ago I posted an article about the ancient Viking game, Hnefatafl, along with some thoughts about why I thought it was a good game for intel professionals to play. 

A lot - and I mean a lot - has happened since then.

The most important thing (at least to me) is that I have developed a new version of the game that is now for sale. It is called Cthulhu vs. The Vikings and is currently available on Kickstarter. The backstory to the game, which is told in the form of a comic, mashes up the Viking sagas with the Cthulhu stories of H.P. Lovecraft (a horror writer from the 1920s).

While the game itself also plays on those themes in terms of the design work (in the board and the pieces), the rules are straight Hnefatafl. In fact, I got permission from the Fetlar Hnefatafl Panel, which sponsors the Hnefatafl World Championships, to use their rules (Note: The Hnefatafl World Championships were held August 3 in Fetlar, Scotland, and Amanda Caukwell is the new World Champion!).

Bottom line: If you are looking for an attractive and affordable copy of the ancient Viking game, Hnefatafl, you can now find one here.

The blog post also got picked up by the radio show, The World, produced jointly by the BBC, Public Radio International and Boston's WGBH.  They interviewed me about the game and about its importance to intelligence professionals.  You can listen to the interview below:

Finally, the game and its relationship to intel also got a little local press and a lot of interest from the readers of this blog (Thanks for the emails and kind words)! 

Monday, August 12, 2013

Game-based Learning And Intelligence Analysis: Current Trends And Future Prospects (Article Summary)

(Eds. Note: On August 7th, we published an article in e-International Relations (an excellent resource) reviewing the utility of games-based learning in the intelligence classroom. The article is summarized below, but you can read the full article here.)


Games like The Mind's Lie were designed
to teach key skills to intelligence analysts.
Eyes glazed, texting, fidgeting in their seats.  For all too many educators, this is an increasingly common sight in classrooms.  One promising solution to this problem, at least with respect to intelligence analysis, comes from the growing body of research into game-based learning.

One of the most useful skills games teach students is the ability to identify deep patterns in disparate sets of data, a skill that maps directly onto the kind of strategic and metacognitive thinking most relevant to the work of an intelligence analyst. So how do games achieve this?

The article asserts that pedagogical strategies such as peer learning, implicit learning and practice-at-recall are all active in a games-based approach. These strategies have wide support from the pedagogical research community, and a more recent body of research from this very domain finds that “extensive experience with music or video games is associated with enhanced implicit learning of sequential regularities” (Bergstrom et al 2011). Findings such as these in the academic literature support the article’s main point: that games teach pattern recognition implicitly.

The only way to achieve this, though, is by playing games, and playing a lot of them. A challenge to taking a games-based approach in the classroom is that no one game a) teaches everything necessary to convey in a course and b) fits the learning style (or gaming style, as it were) of every student. For this reason, students shouldn’t play just one game; they should play many. The article references another caveat to the games-based approach, presented in Kris Wheaton’s 2011 paper Teaching Strategic Intelligence Through Games: though student success in course projects increased over time with the implementation of a games-based approach, student satisfaction with the course decreased. The article elaborates on potential explanations for this phenomenon.

Attention is the currency of learning, and the standard lecture format is not long for this world. The games-based approach provides a possible solution as an innovative way of imparting knowledge.

Wednesday, August 7, 2013

Is Forensic Speaker Recognition The Next "Fingerprint?"

Take a fingerprint... for that matter, go ahead and take a palm print. Now, take a voiceprint. In this day and age, forensic biometric analysis is extraordinarily complex. In a world where we analyze everything from irises to earlobes, what can science tell us about voice?

One increasingly popular form of analysis is forensic speaker recognition (aka voice biometrics or biometric acoustics). Forensic speaker recognition (FSR) has unequivocal potential as a supplementary analytic methodology, with applications in both the fields of law enforcement and counterterrorism (for details, see the last section of the 2012 book on FSR Applications to Law Enforcement and Counter-terrorism).

The utility of the FSR process is either one of identification (1:N or N:1) or verification (1:1).

  • 1:N Identification -- Imagine you have a recording of a voice making threats over the phone. The speaker identification process lets you compare that single target voice against a database of acoustic recordings of N known suspects to determine who, if anyone, made the threats.
  • N:1 Identification -- Imagine you have a bunch of voice recordings and you want to know in which of them, if any, a certain speaker participates. 
  • 1:1 Verification -- Imagine you wish to grant someone access to a building or secure location by assessing whether or not they are who they say they are (this aspect of speaker recognition is less applicable to analysis and more applicable to security). 
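As a rough illustration of the 1:N case, here is a toy sketch that ranks a database of known speakers against a target voice by cosine similarity of feature vectors. The names and three-number "voiceprints" are invented for the example; real systems derive much higher-dimensional features from the acoustic signal itself, and a real match would come with a confidence measure, not a hard answer.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical database of known suspects' voice feature vectors.
database = {
    "suspect_a": [0.9, 0.1, 0.3],
    "suspect_b": [0.2, 0.8, 0.5],
    "suspect_c": [0.4, 0.4, 0.9],
}

# Feature vector extracted from the threatening phone recording.
target = [0.85, 0.15, 0.35]

# 1:N identification: rank every known speaker against the one target voice.
scores = {name: cosine(vec, target) for name, vec in database.items()}
best_match = max(scores, key=scores.get)
```

The N:1 case is the same comparison run the other way: one known speaker's vector scored against each of N unknown recordings.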
That said, the CIA, the NSA and the Swiss IDIAP all turned to automatic speaker verification systems in 2003 to analyze the so-called Osama tapes (for details of the approach, see Graphing the Voice of Terror). This case provides an excellent opportunity to note the distinction between automatic speaker recognition performed by an algorithmic machine and aural speaker recognition performed by acoustic experts. 

The cornerstone methodology supporting forensic speaker recognition is voiceprint analysis, or spectrographic analysis, a process that visually displays the acoustic signal of a voice as a function of time (seconds or milliseconds) and frequency (hertz) such that all components are visible (formants, harmonics, fundamental frequency, etc.).
(Note:  For those who are more acoustically inclined and would enjoy a well-written read on all things acoustic from military strategy to frog communication, Seth Horowitz's new book The Universal Sense: How Hearing Shapes the Mind comes with my highest recommendation.)
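To make the spectrographic idea concrete, here is a bare-bones sketch of its core operation: measuring how much energy a short window of signal carries at each frequency, via a discrete Fourier transform. A real spectrogram repeats this over sliding windows to build the time axis; the sampling rate, window length and synthetic 120 Hz "voice" below are arbitrary illustration values.

```python
import cmath
import math

FS = 8000    # sampling rate in Hz (telephone-quality audio)
N = 400      # window length in samples (50 ms at 8 kHz)
F0 = 120.0   # synthetic "fundamental frequency" standing in for a voice

# One 50 ms window of a pure tone at F0.
signal = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]

def dft_magnitudes(x):
    """Magnitude of each frequency component up to the Nyquist frequency."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

mags = dft_magnitudes(signal)

# The loudest bin tells us the dominant frequency in this window.
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * FS / N   # frequency resolution here is FS / N = 20 Hz
```

Stacking one such magnitude column per window, left to right, is exactly the time-versus-frequency picture a spectrogram displays.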
Spectrographic analysis differs from human speaker recognition in that it provides a more quantifiable comparison between two speech signals. Under favorable conditions, both approaches perform well: 85 percent identification accuracy (McGehee 1937), 96 percent accuracy (Espy-Wilson 2006), 98 percent accuracy (Clifford 1980), 100 percent accuracy (Bricker and Pruzansky 1966). These approaches, however, do not come without caveats.

Forensic speaker recognition has many limitations and is currently inadmissible in federal court as expert testimony. Bonastre et al (2003) summarize these limitations quite well:  
"The term voiceprint gives the false impression that voice has characteristics that are as unique and reliable as fingerprints... this is absolutely not the case."
The thing about voices is that they are susceptible to a myriad of external factors such as psychological/emotional state, age, health, weather... the list goes on. From an application standpoint, the most prominent of these factors is intentional vocal disguise. There are a number of things people can deliberately do to their voices to drastically reduce the ability of a machine or a human expert to identify them correctly (you would be amazed at how difficult it is - nearly impossible - to identify a whispered voice). Under these conditions, identification accuracy falls to 40-52 percent (Thompson 1987), 36 percent (Andruski 2007), 26 percent (Clifford 1980).
Top: Osama bin Laden's "dirty" 2003 telephonic spectrogram
Bottom: Osama bin Laden's "clean" spectrogram
Source: Owl Investigations

More problematic still is communication by telephone. Much of the input law enforcement and national security analysts have to work with comes from telephone wiretaps or calls made from jail cells. Telephones, cellphones in particular, act as a band-pass filter on the acoustic signal: acoustic information below roughly 300 Hz (and above about 3,400 Hz) simply does not get transmitted, and within the discarded range lie some of the key characteristics for voice identification.
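A quick sketch makes the telephone problem tangible. Below, an idealized band-pass filter applied in the frequency domain wipes out a synthetic ~120 Hz "fundamental" entirely while leaving a 600 Hz harmonic intact. All parameters are illustrative, and a real telephone channel is messier than this ideal filter, but the loss of low-frequency identifying information is the same in kind.

```python
import cmath
import math

FS, N = 8000, 400                # sample rate (Hz), window length (samples)
f0, harmonic = 120.0, 600.0      # fundamental + one harmonic of a fake "voice"

# A voice-like signal: strong fundamental plus a weaker harmonic.
signal = [math.sin(2 * math.pi * f0 * n / FS)
          + 0.5 * math.sin(2 * math.pi * harmonic * n / FS)
          for n in range(N)]

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = dft(signal)
hz_per_bin = FS / N              # 20 Hz per frequency bin

# Idealized telephone channel: zero every bin outside ~300-3,400 Hz
# (min(k, N - k) handles the mirrored negative-frequency bins).
for k in range(N):
    f = min(k, N - k) * hz_per_bin
    if not (300.0 <= f <= 3400.0):
        spectrum[k] = 0

def bin_mag(freq):
    return abs(spectrum[round(freq / hz_per_bin)])

lost_fundamental = bin_mag(f0)        # stripped by the channel
kept_harmonic = bin_mag(harmonic)     # survives transmission
```

After "transmission," the fundamental frequency - one of the identifying characteristics spectrographic analysis relies on - is simply gone from the signal.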

While the forensic speaker recognition capability has come a long way since 2003, the consensus among the analytic community remains that it is not a stand-alone methodology but a promising supplementary tool. Biometric analysis was also a topic brought to the Intelligence Technology panel of the 2013 Global Intelligence Forum, where the expanding applicability and increasing capabilities of all biometric technologies were of particular note.

Thus far, the Spanish Guardia Civil is the only law enforcement agency worldwide to have a fully operational acoustic biometric system (called SAIVOX, the Automatic System for the Identification of Voices). In the Spanish booking process, voice samples are taken just as fingerprints are, and then added to a corpus of over 3,500 samples linked to known criminals and certain types of crime.

In 2011, the FBI commissioned NIST to launch a program on "investigatory voice biometrics." The goal of the committee is to develop best practices and collection standards for an operational voice biometric system, modeled on the Spanish one, with corpora robust enough to serve as a useful tool in ongoing investigations. (This is an ongoing project and you can read the full report here.)

FSR is not a perfect methodology, but one that can add substantial value on a case-by-case basis. It is of high interest to the US national security and law enforcement analytic communities.

Additional reading:
Andruski, J., Brugnone, N., & Meyers, A. (2007). Identifying disguised voices through speakers' vocal pitches and formants. 153rd ASA meeting.
Bonastre, J. F., Bimbot, F., Boe, L. J., Campbell, J. P., Reynolds, D. A., & Magrin-Chagnolleau, I. (2003). Person authentication by voice: A need for caution. Eurospeech 2003.
Bricker, P. D., & Pruzansky, S. (1966). Effects of stimulus content and duration on talker identification. Journal of the Acoustical Society of America, 40, 1441-1449.
Clifford, B. R. (1980). Voice identification by human listeners: On earwitness reliability. Law and Human Behavior, 4(4), 373-394.
Espy-Wilson, C. Y., Manocha, S., & Vishnubhotla, S. (2006). A new set of features for text-independent speaker identification.
McGehee, F. (1937). The reliability of the identification of the human voice. Journal of General Psychology, 31, 53-65.
Parmar, P. (2012). Voice fingerprinting: A very important tool against crime. Journal of Indian Academy of Forensic Medicine, 34(1), 70-73.