Cultural exchange


Natar Ungalaaq stars in The Necessities of Life (Ce Qu’il Faut Pour Vivre) as an Inuit hunter forced by illness to a Quebec City sanatorium.

The Necessities of Life (Ce Qu’il Faut Pour Vivre) is a fish-out-of-water drama about an Inuit hunter forced by illness to move into a Quebec City sanatorium during the tuberculosis epidemic of the 1950s. Separated from his family and culture for the first time, in an alien place where he cannot speak or understand the language, Tivii loses the will to live. His sympathetic nurse, Carole, arranges for a young Inuit boy named Kaki to be transferred to the sanatorium.

Kaki, who also speaks French, offers his elder companionship and a means to communicate while Tivii takes a paternalistic interest in renewing Kaki’s connection with traditional Inuit culture. Tivii rediscovers his pride and energy and the bond between the two hospital patients grows stronger.

The film, opening March 13 at Fifth Avenue Cinemas, received eight Genie Award nominations and was Canada’s submission for the 2009 foreign-language Oscar. Critics have praised its sensitive handling of emotional life and the absorbing central performance by Natar Ungalaaq (star of Atanarjuat: The Fast Runner), while Benoît Pilon, a director crossing over from documentary with this first fiction feature, provides a steady hand at the helm.

Cultural exchange is the name of the game at Ozflix: Australian Film Weekend, a four-day showcase of films from Down Under at the Pacific Cinémathèque.

Among them is the mid-teen, coming-of-age drama The Black Balloon. It follows Thomas (Rhys Wakefield), who is desperate to fit in and meet girls at his new school in Sydney, but who suffers embarrassment about his autistic brother Charlie. A budding romance with the attractive and spirited Jackie (Gemma Ward), who is in his swimming class, helps Thomas learn about acceptance and self-worth. As a slice of life in a crazy, loving family, it’s a slight film, but enjoyable thanks especially to excellent performances by Toni Collette as the devoted, workaholic pregnant mum and Erik Thomson as the military dad who takes advice from a teddy bear. The pretty stars look older than their parts, but the film has the authentic feel of someone’s personal story.

Among Ozflix’s crop of shorts, animation and features, there’s a double-bill screening of two parts of the documentary series Great Australian Albums. I watched the episode on Nick Cave and the Bad Seeds’ Murder Ballads, the 1996 album that started life as a joke (an entire album of ballads about murder), but went on to become the band’s biggest commercial success.

As someone who has acquired a taste for Nick Cave’s brooding, gothic lyricism over the years, I found this hugely enjoyable. The creative process is well documented – amazingly, the band still records live performances in the studio on tape – and the mix with archive footage going back to Cave’s punk roots decades ago is done well. In interview, Cave comes across as suave, wry and characteristically dark.

Persuading pop princess Kylie Minogue to duet with him on surprise hit Where the Wild Roses Grow was not as difficult as one might think, even though Cave admits the lyrics were “seriously creepy… with a capital ‘K’”. It’s interesting to learn that his simmering music video with songstress PJ Harvey on Henry Lee was done in one take. After its 52 minutes, I wanted to get the album. It screens with a portrait of sunny, eighties indie-pop success, The Go-Betweens – 16 Lovers Lane (15th, 5pm).

Finally, Michael McGowan’s One Week is a road trip movie about a young man (Joshua Jackson) who, when diagnosed with cancer, decides to ride a vintage motorcycle from Toronto to Tofino, BC. It’s described as “an ode to the Canadian landscape” with a soundtrack that includes Sam Roberts, Stars and Patrick Watson. 


Robert Alstead maintains a blog at

Latest Palme winner a class act


Scene from The Class

Opening this month, Laurent Cantet’s French-language feature The Class (Entre Les Murs) won the Palme d’Or, the top prize, at the Cannes Film Festival this past summer. The film is based on teacher François Bégaudeau’s 2006 novel about his experiences at a junior high school in a tough Paris neighbourhood and stars the author himself as maverick French-language teacher François Marin.

Palme d’Or winners typically offer strong socio-political commentary, although treatments vary widely, from Michael Moore’s documentary Fahrenheit 9/11 (2004), with its entertaining invective, to the aching, angst-ridden existentialism of the Dardenne brothers, two-time winners with Rosetta (1999) and L’Enfant (2005).

While The Class falls more into the latter category, it has a straightforward, lighter touch than other moody works of the Belgian auteurs. Considering the potential for tragedy and strife in its study of a class of 13-15-year-olds from deprived, multicultural Paris, it’s surprisingly lively with its verbal sparring matches between the teacher and his troublesome pupils.

All the action takes place within the school and mostly within the classroom itself. Although it’s a fictional piece, there’s a documentary realism to it; think handheld, fly-on-the-wall shots and a flood of dialogue. You would be forgiven for initially thinking that you are following a slick TV crew on an assignment rather than watching a work of fiction.

The film was loosely scripted, with students improvising dialogue. Three high-definition cameras captured the action and you’d never guess from the quality of the performances that the 24 teen actors were drawn from a tiny pool of 50 students from inner-city Parisian schools.

The narrative structure is necessarily loose – a teacher arrives and starts teaching – but it draws you in and then hooks you with a dramatic plot twist towards the end. François pushes, goads, encourages and teases his students and allows them to dish it back. This works most of the time and even his most difficult students, like the surly Malian Souleymane, start responding to his approach. As long as he can maintain the delicate balancing act of disciplined decorum and free-flowing interaction, he appears to get results, stimulating lively discussion.

But it’s never easy and as external strains begin to take their toll, his methods are questioned in the staff common room. Ultimately, he crosses a line that undermines his authority with his students. Unlike some more gooey films of this genre, the story remains credible to the end, but it is the subtle changes in the way power is wielded between the four walls that makes this such an interesting film.

Also out this month is Steven Soderbergh’s two-part biopic Che (30), starring Benicio del Toro as the iconic Ernesto “Che” Guevara. In part one, The Argentine, he sets sail for Cuba in 1956 with Fidel Castro and 80 rebels to overthrow the corrupt dictatorship of Fulgencio Batista. The film follows Che’s rise from doctor to commander to revolutionary hero.

Part two, Guerrilla, starts at the height of Che’s fame following the Cuban Revolution. He emerges incognito in Bolivia, leading a small group of Cuban comrades and Bolivian recruits in a bid to spread the revolution across Latin America. However, for all the will in the world, his campaign is doomed. The almost five-hour film has been praised for Benicio del Toro’s performance, although critics are still arguing over whether Soderbergh’s portrait of Che is too dispassionate and uncritical.

Finally, the Vancouver International Mountain Film Festival will show a string of movies and multimedia presentations on the theme of climbing and outdoor pursuits (February 20-28) at the Centennial Theatre in Lonsdale and the Pacific Cinémathèque. Details at


Robert Alstead maintains a blog at

The magic of music

A surgeon struck by lightning and a dancing parrot hold clues to music’s profound effects

by Geoff Olson

Film producer Mark Johnson was on his way to work one day when he heard two monks playing music in a New York subway. One played a nylon-string guitar and the other sang in a language the producer didn’t understand. In a recent PBS interview with Bill Moyers, Johnson recalled that a few hundred people had gathered around, spellbound by the robed figures. He was struck by how all of these strangers, travelling their separate ways, had been brought together by music.

Some time later, Johnson was walking in Santa Monica when he heard a musician playing a song on the street. He was so moved by the performance that he approached the singer, Roger Ridley, and asked if he could return with some recording equipment and cameras. He told Ridley that he would love to take the song around the world and add other musicians to it.

Johnson says he isn’t sure if he chose Ben E. King’s classic ballad, Stand By Me, or if it chose him. Travelling around the world with Ridley’s bare-bones vocal performance of the song, he enlisted others to contribute, from blues singers in post-Katrina New Orleans, to a South African choir, to a Moscow chamber group. Adding their multiple layers of instruments and vocals, Johnson built the voice of one unknown street musician into a polyrhythmic hymn of shared humanity. Johnson’s 10-year musical adventure is portrayed in his documentary Playing for Change: Peace Through Music.

The universal language of Homo sapiens was, is, and forever will be, music. As a species, we are moved both emotionally and physically by the sounds we make. Somehow, pressure waves in the air, no more substantial than the flutter of a hummingbird’s wings, can elicit anything from tears to tapping feet. The word “enchantment,” derived from the Latin incantare, means to chant or sing a spell. This archaic word connects beauty and the supernatural with song – the earliest and most persistent form of magic.

In his book This Is Your Brain on Music (2006), cognitive psychologist and record producer Daniel J. Levitin describes a kind of Sardinian a cappella music in which, when the four male voices are perfectly balanced, a fifth, female voice is conjured in the listener’s mind. The Sardinians attribute this voice to the Virgin Mary. While secular explanations for the phenomenon are available, should we be concerned that analyzing music scientifically may detract from our aesthetic appreciation of it?

The Ancient Greeks didn’t think so. Pythagoras and his followers drew no great distinction between science and art, or between music and mathematics. They believed the mathematical regularity of chord sequences was a key to the structure of the universe itself: a “music of the spheres,” in which harmony united everything from planetary movements to birdsong.

Modern-day scientists, however, aren’t much more definitive than the Pythagoreans when it comes to understanding music and the mind. “The thrills, chills and tears we experience from listening to music are the result of having our expectations artfully manipulated by a skilled composer and the musicians who interpret that music,” writes Levitin. But this is trivially true, offering no real explanation for how emotions can be conjured by a sequence of notes. The nineteenth-century composer Mendelssohn was a bit more helpful with his claim that music conveys thoughts that are “not too vague to be put into words, but too precise.”

Aldous Huxley echoed the composer’s ideas about music. In the early 1930s, the British writer was on holiday in the Mediterranean. On a moonless June night “alive with stars,” he groped about in his dark guesthouse for a record to play. He put on the introduction to Beethoven’s Missa Solemnis, the Benedictus. Later, in his book Music at Night, Huxley wrote the following: “The Benedictus. Blessed and blessing, his music is in some sort the equivalent of the night, of the deep and living darkness, into which, now in a single jet, now in a fine interweaving of melodies, now in pulsing and almost solid clots of harmonious sound, it pours itself… like time, like the rising and falling trajectories of a life.”

What was Beethoven trying to say in the symphonic language of the Benedictus? Huxley felt it was the composer’s idea of “a certain blessedness lying at the heart of things.” This was something Beethoven could only communicate nonverbally, through composition.

Describing any kind of music is like trying to describe a watercolour to a blind man. As the playwright Tom Stoppard once said of music critics, “Writing about music is like dancing about architecture.” Not surprisingly, with something so fugitive in meaning, but so personally meaningful, scientists have had difficulty in explaining the origins of music. Finding an “evolutionary purpose” for musical talent remains a guessing game.

The granddaddy of evolutionary thought, Charles Darwin, thought that music was a kind of showing off to the opposite sex, the auditory equivalent of a peacock’s tail. Echoing Darwin, Levitin argues that music is something that male humans developed as a way to demonstrate reproductive fitness. (Rock n’ roll, anyone?) To Harvard psychology professor Steven Pinker, music is “auditory cheesecake,” the by-product of our species’ freakishly large brains. Just as algebra or chess were never survival skills sharpened by natural selection, music is a complex human faculty that exercises other more functional faculties. We do it because it’s fun – and it’s fun because it builds neural pathways that are shared with more survival-based skills, like rhythmic movement. But it’s still an accidental gift.

Ian Cross, director of the Centre for Music and Science at Cambridge University, rejects Pinker’s explanation as reductionist and wrong. In an interview for The Guardian’s “Science Weekly” podcast, Cross points out that we don’t engage with music solely by listening: it is also “active and interactive, and something you do, that is embedded in complex, active behaviours.”

Karaoke, raves and Baptist church choirs are all about music as total involvement. In many non-western cultures, there is little distinction between music and dance. For Cross, the evolutionary purpose of music is communal; it fosters social cohesion as a replacement for grooming, a social activity enjoyed by primates.

“Music seems to be extremely good, extremely useful for managing situations of social uncertainty,” says Cross, “and it’s evolutionarily functional in promoting and sustaining a capacity for sociality.”

It shouldn’t be surprising that we share an enjoyment of music with other social animals. This is where Snowball the parrot comes in. If you haven’t seen him in action yet, check out this white cockatoo’s performances on YouTube, where he bops along to his favourite songs, Everybody (Backstreet’s Back) by Backstreet Boys and Stevie Nicks’ Edge of Seventeen. On the latter song, the screeching Snowball shakes his head back and forth, kicks his legs out, and at one point, appears to tap one claw on the downbeat.

Aniruddh Patel, a senior fellow at the Neurosciences Institute in California, received a link to Snowball from a friend and decided to test whether the cockatoo was really dancing. He got in touch with Snowball’s owner, Irena Shulz, asking if she would help him study the parrot. Patel sent her CDs of the bird’s favourite Backstreet Boys track at different tempos and had her videotape his routines. He then graphed Snowball’s movements against the varying beats. Patel discovered that the frequent moments when Snowball locked onto the beat weren’t by chance. They demonstrated sensitivity to rhythm and an ability to synchronize to it.

Snowball’s paradigm-busting performances appear to hinge on those skill sets shared by parrots and human beings alike: vocal learning and imitation. Like us, parrots are highly social animals with brains wired to interpret sounds and coordinate the complex movements of vocal organs to reproduce them. Perhaps we have more in common with the avian world than we think. In Kerala, India, the chants of Brahmin priests mystify experts. They bear no resemblance to any known language or music, but rather to patterns found only in bird song. Some believe these chants are part of an oral tradition that may predate language, going back beyond the first Indo-European peoples.

Whatever our evolutionary or neurological fellowship with birdbrains, it’s impossible to witness Snowball’s YouTube performances without recognizing his sheer joy. He’s obviously enchanted with the music and his own dancing. Similarly, human performers and their audience can fuse into one body of rhythmic celebration, as anyone knows who’s been to a particularly memorable rock concert or rave. Music can capture the attention in an “eternal present” that is comparable to sexual ecstasy or mystical states.

This timeless dimension of music was poignantly reflected in Prisoner of Consciousness, a BBC documentary about a brain-damaged musicologist studied by neurologist Oliver Sacks. The patient, Clive Wearing, was stricken with a severe brain inflammation that left him with a memory span of only a few seconds. Without a recognizable past, and unable to imagine a future, Wearing once told his wife his purgatorial life was “like being dead.” Although he can never remember her, each time he sees her he is thrilled.

Asked to play a Bach prelude, Wearing initially says he doesn’t know any, but he still summons one up when he is at the piano. By way of explanation, Sacks suggests that musical recall is not quite like other kinds of memory: “Remembering music is not, in the usual sense, remembering at all… Listening to it, or playing it, is entirely in the present.”

In his most recent book, Musicophilia, Sacks notes the well-known health benefits of music, for both the healthy and the sick. It is a remarkable thing that, even in the worst cases of dementia, “there is still a self to be called upon, even if music, and only music, can do the calling.”

Sometimes, music acts like a force or a personality in and of itself. In his book, Sacks profiles 42-year-old Tony Cicoria, a surgeon who was hit by lightning. While Cicoria was resuscitated and made a full recovery, this rock music fan was subsequently seized with an unaccountable, newfound interest in classical piano music. He sought out CDs and then a piano, teaching himself to play. Within three months, his mind was overwhelmed with music that seemed to come out of nowhere. Ten years after his electrifying encounter, Cicoria is as obsessed with classical music as ever, but uninterested in using new brain-scanning technologies to understand his condition. He insists it is a “lucky strike” and that the music in his head is “a blessing … not to be questioned.”

The link between music and emotions is difficult to quantify. Neurologist Manfred Clynes is one of the very few scientists to have studied the “touching” aspects of music. In addition to having more than 40 patents credited to his name, the Vienna-born neurologist is also a concert pianist who has recorded superb versions of Bach’s Goldberg Variations.

In a bizarre series of experiments, the inventive Clynes asked subjects to apply finger pressure on a button to express emotions. The subjects consistently displayed the same gradients of force for different emotions. Anger, for example, is a short, sharp stab on the button. Joy is a soft pressure with a quick release. When Clynes plotted out these gradients and played them back electronically, the results were astounding. The simple tones “sounded” joyous, angry or grieving.

Clynes then tried the same experiment in reverse. Subjects were taught the different pressure gestures corresponding to emotional states, without being told what they meant. Most were able to correctly match them later with their corresponding emotional states. In one of Clynes’ experiments, aborigines in Central Australia were able to correctly identify the specific emotional quality of sounds derived from the touch of white, urban Americans. A Wikipedia article about Clynes suggests he has hit upon music’s Rosetta Stone, discovering the “biologically fixed, universal, primary dynamic forms that determine expressions of emotion that give rise to much of the experience within human societies.”

Clynes’ musical research is revolutionary and Sacks’ medical prose lyrical, but other scientific literature on music and the mind seems to fall short. I’m left with the impression of a group of blind men in white coats, feeling an elephant with their hands, each giving a tactile report on a different body part – tail, ears and legs – but never getting a fix on the complete beast. There is “explaining” and then there is “explaining away.”

In his study of college singers at the University of California, psychologist Robert Beck found that singing boosts compounds that create a sense of happiness and well-being. Singing produces immunoglobulin A, an antibody that counters the stress hormone cortisol. But since every mood appears to have an associated neurochemical, and everyone knows music makes us feel good, is this any more than a peer-reviewed tautology? To give another example, do any of us seriously think our understanding of “love” is fully contained by describing it as the “endogenous production of endorphins”?

The problem comes down to two separate domains: language and music. Although they are connected through song, there is still a divide, according to Aldous Huxley. “Music says things about the world, but in specifically musical terms. Any attempt to reproduce these musical statements in our own words is necessarily doomed to failure. We cannot isolate the truth contained in a piece of music, for it is a beauty-truth and inseparable from its partner. Only music, and only Beethoven’s music, and only this particular music of Beethoven, can tell us with any precision what Beethoven’s conception of the blessedness at the heart of things actually was.”

Philosopher Alan Watts insisted that music refers to nothing other than itself. He believed that music is so engaging and powerful precisely because life, and the cosmos it’s embedded in, is a dynamical pattern of waveforms – exactly what music is. In The Tao of Philosophy, Watts notes that the point of a musical composition isn’t the finish, as in a footrace or the solution to an equation. If it were, he says, “People would go to the concert just to hear one crashing chord.” The same applies to dancing. “You don’t aim at a particular spot in the room where you should arrive. The whole point of dancing is the dance.”

Yet, early in life, we are tricked into the belief that life is a race, Watts says, with a string of goodies strung along from primary school to the world of adult employment, benchmarks for status and success. This process can end with the struggling wage slave “in some racket… selling insurance.” We may finally reach a place of social standing and economic security, but we feel vaguely cheated. And we were.

According to Watts, “We have simply cheated ourselves the whole way down the line. We thought of life by analogy – as a journey or pilgrimage – which had a serious purpose at the end. And the thing was, to get to that end, success, or whatever it is, or maybe heaven after you’re dead. But we missed the whole point all the way along. It was a musical thing, and you were supposed to sing… or to dance while the music was being played.”

If he could put it into words, Snowball the parrot would surely agree with Watts, Huxley and Mendelssohn. Music isn’t so much a problem to be solved as a mystery to be lived.

Cinema as therapy


Scene from Waltz with Bashir

Israeli director Ari Folman, a draftee during Israel’s invasion of Lebanon in 1982, wanted to tell a story about his wartime experiences, but he realized that “no one would want to watch a middle-aged man telling stories that happened 25 years ago without any archival footage to support them.” So he took the unusual step of making an animated documentary.

The four years it took Folman to make the autobiographical Waltz With Bashir (Vals im Bashir) was a kind of therapy, as he sought to unlock repressed memories of that episode in his life through interviews with friends and former comrades. Each of the former soldiers coolly and almost matter-of-factly recalls the horrors and stresses of combat, both as it happened and as it affected them in the ensuing years.

The resulting arrangement of original interviews put to comic-book-style visuals is at once haunting, dreamlike and beautiful in its imagery, through a combination of Flash, classic animation and 3D. “It was shot in a sound studio and cut as a 90-minute length video film. It was made into a story board and then drawn with 2,300 illustrations that were turned into animation,” Folman explains. The visual style is simple but effective and, while it doesn’t use rotoscope animation, where artists illustrate and paint over video images, it has that naturalistic aspect to it.

Animation allows Folman to decompartmentalize the worlds of dream, memory and reality, showing how each is more closely connected than we normally acknowledge, something conventional video could not accomplish here. Each of the interviewees has powerful images that they carry within them. One has a recurring nightmare of being chased by a pack of snarling dogs. Another remembers the feeling of peace as he floated at sea after swimming away from an ambush that wiped out the rest of his squad. Folman himself frequently sees a recurring scene – possibly a memory – in which he and two comrades emerge naked from the sea in a war-torn Beirut. They then dress and walk into a street of wailing Palestinian women running toward them.

Folman’s search for the blanks in his memory leads him to an understanding of Israel’s role in the massacre of an estimated 3,000 refugees in the Sabra and Shatila refugee camps in Beirut. The title of the film, incidentally, is taken from the then president-elect of Lebanon, Bachir Gemayel, whose assassination in 1982 led to Phalangist Christian militias exacting their horrendous revenge. Along the way, the film vividly conveys the tragedy and enduring psychological damage caused by war.

Waltz With Bashir is Israel’s foreign-language submission for the Oscars and it was nominated for a Golden Globe in the same category in December. There are a number of Globe nominees among this month’s new movies.

In The Reader, Ralph Fiennes grapples with his conscience when, after the Second World War, he discovers that his first love, a blonde Kate Winslet, was a Nazi concentration camp guard.

The Curious Case of Benjamin Button, an adaptation of F. Scott Fitzgerald’s 1920s story, retells the adventures of a man (Brad Pitt) who is born old and ages backwards. The film, which also stars Cate Blanchett, has been nominated for five Golden Globes.

Among the flurry of romantic dramas out this month is the pairing of Dustin Hoffman and Emma Thompson in Last Chance Harvey (January 23). Hoffman is an over-the-hill jingle-writer, who, while visiting London for his daughter’s wedding, strikes up an unexpected relationship with an unhappy, aspiring writer played by Thompson. Both actors were nominated for Globes for their performances. The Globes ceremony takes place January 11.


Robert Alstead maintains a blog at

The Gift: the nature of real abundance

by Geoff Olson

Although it’s a relatively obscure book, The Gift: Imagination and the Erotic Life of Property is considered something of an underground classic in literary and artistic circles. Canadian writer Margaret Atwood reportedly keeps half a dozen copies of Lewis Hyde’s book on hand for friends and acquaintances. Other fans of The Gift include writers Zadie Smith, Michael Chabon and Jonathan Lethem, and singer-songwriter Bruce Cockburn, who was inspired by the book to write a song of the same name.

In a recent article in The New York Times Magazine, Daniel B. Smith observes that The Gift has been “adopted as something like the theory bible” of the Burning Man festival, a yearly gathering of artists in the Nevada desert where money is replaced by barter. Video-art pioneer Bill Viola remembers New York artists exchanging dog-eared, marked-up copies of Hyde’s book back in the eighties. My personal copy, which I found a few years ago in a used bookstore in Vancouver, looks like it has been through the wringer, literally: it’s underlined in pen and pencil throughout and the back pages are corrugated from water damage. It obviously passed through a few hands before it got to mine.

It’s difficult to summarize this cross-disciplinary, yet lyrical book. The first part of The Gift examines patterns of gift exchange in aboriginal societies. The second part explores the role and place of creative artists in a market-oriented world. In essence, Hyde’s book is one of the few studies ever made of the cultural anthropology of giving. It’s an ode to abundance, at both the communal and psychic level. Hyde’s work was partly inspired by the work on reciprocity by anthropologist Marshall Sahlins, one of the first academics to question the classic definition of economics as “the science of choice under scarcity.” Sahlins’ words, quoted in The Gift, are as relevant today as when they were written:

“Modern capitalist societies, however richly endowed, dedicate themselves to the proposition of scarcity… The market-industrial system institutes scarcity, in a manner completely unparalleled and to a degree nowhere else approximated. Where production and distribution are arranged through the behavior of prices, and all livelihoods depend on getting and spending, insufficiency of material means becomes the explicit, calculable starting point of all economic activity.”

Economists Paul Samuelson and Milton Friedman both begin their textbook examinations of economics with the “Law of Scarcity” and, as Hyde dryly observes, “It’s all over by the end of Chapter One.” In contrast, the work of early twentieth-century anthropologists like Franz Boas and Bronislaw Malinowski demonstrates that the person in aboriginal cultures deemed worthy of respect and adulation is not the one who accumulates the most possessions, but the one who gives them all away. Among the Trobriand Islanders, Hyde discovered, it could take as long as 20 years for a necklace or armband to circulate around the islands and return to its original owner. Such objects were never intended as possessions to be hoarded, but rather as prizes to cherish for a time and then pass on.

Hyde determined that, in aboriginal societies, gifts are a class of property whose value lies not only in their use but which “literally cease to exist as gifts” if they are not understood as part of a communal network of reciprocal relationships. They are the material expression of immaterial sympathy. Even though gift cycles were never the sum total of aboriginal market relations, early explorers and settlers were puzzled by exchanges that generated no discernible profit.

In the first colonies of Massachusetts, the Puritan settlers were so puzzled by the natives’ unique concept of property that they gave it a name, one long in circulation by the time of Thomas Hutchinson’s 1764 history of the colony. “An Indian gift,” he told his readers, “is the proverbial expression signifying a present for which an equivalent return is expected.” Hyde points out that the opposite of “Indian giver” would be something like “white man keeper” or “capitalist.” In other words, “a person whose instinct is to remove property from circulation, to put it in a warehouse or museum (or, more to the point for capitalism, to lay it aside to be used for production).”

The gift, by its nature, breaks down boundaries. This has been its principal function in archaic societies: to put the tribe into accord, not just with one another, but also with the larger world of animals, spirits or gods. This is obviously not comparable to western gift-giving, which usually entails two individuals exchanging a gift. According to Hyde, the minimum number for a gift circle is three.

Australian Aborigines commonly refer to their own clan as “my body,” using a personal expression of enlarged identity – just as we do in a marriage ceremony when we speak of “one flesh.” “When we are in the spirit of the gift, we love to feel the body open outward,” the author adds. In contrast, the assumptions of modern-day market exchange “may not necessarily lead to the emergence of boundaries, but they do in practice.” Today, these boundaries are even more obvious, not just in the enormous disparities between rich and poor nations, but within these nations themselves. And there are other more subtle boundaries, such as the walls we create between one another and within our own hearts and minds, as we internalize the values of commodification. The word “citizen,” which connotes communal participation, nets a little over a million hits in Google, while the word “consumer,” which connotes social isolation and material attachment, nets almost three million. That’s an intriguing measure of how we’ve come to define ourselves, at this critical point in planetary history.

With the academic assumption that all human relations take place within a matrix of diminishing possibilities, it’s no surprise that the world dominated by electronic capital has come to resemble its theoretical foundations in scarcity. They don’t call economics “the dismal science” for nothing.

But is scarcity really such a permanent condition for human beings? As American author and philosopher Robert Anton Wilson once observed, “Known resources are not given by nature; they depend on the analytical capacities of the human mind. We can never know how many resources can be obtained from a cubic foot of the universe: all we know is how much we have found thus far, at a given date. You can starve in the middle of a field of wheat if your mind hasn’t identified wheat as edible. Real Wealth results from Real Knowledge, which is increasing faster all the time.”

Technological invention depends on a class of cultural creatives not included by Hyde in his book: inventors, technicians and scientists. Yet we owe the makeup of the modern world almost entirely to their ambiguous gifts to society, from penicillin to plutonium, from airbags to armaments. And in recent years, the worlds of artists and technicians have begun to merge with digital technology. The accelerating pace of change has kept the cultural creatives, from songwriters to computer animators, scrambling to find their place in a fast-changing world. And as media monopolies look at their plunging circulation and sales figures, regrouping and selling off their failing properties, the Internet has remained an open portal for a wide range of creativity.

Ironically, the Internet is the closest thing we have today to aboriginal gift cycles. In spite of its downside, it has come to embody the ancient, archetypal habit of giving freely to strangers. From message boards to “wikis,” Internet users are willing to help each other, even though they don’t really have to and they don’t get “paid” for it in credit. The open-source movement, in which anonymous programmers tinker with and improve publicly accessible software code, and the “CopyLeft” movement to introduce a “creative commons” for freely distributed artistic works, defy not only traditional market economics, but all previous expectations of how people are supposed to behave in a market economy. Who voluntarily works for free, wanting only to contribute to a greater good? Millions, apparently. Homo economicus is not supposed to act this way.

But can anything resembling traditional wealth come from offering services for free, as gifts? It’s all well and good for those who have the leisure or financial security to contribute for free. But how can any viable economic model emerge from such altruistic activity? Or could it be that our ideas about economics are limited, or even false? “Basically, it’s the problem that occurs when people focus too hard on the idea that economics is the study of resource allocation in the presence of scarcity. That only makes sense when there’s scarcity – and in digital goods, scarcity doesn’t exist,” notes blogger Mike Masnick in his Tech Dirt column.

Masnick is referring to how the digital age has brought about endless copying of movies, songs, software programs and other intellectual property. The costs of reproducing these creative works have essentially dropped to nothing, now that they can be reduced to a string of ones and zeroes. Prior to the digital age, an average civic cinema could only show films that would attract a little more than a thousand people over two weeks, most of whom lived within a few miles’ radius. For the most part, this has limited screenings to major distribution films. Yet DVDs extend the shelf life of films, with the cost of a rental less than that of a movie ticket. With digital downloads, the costs per movie shrink further but the potential orders increase as well.

Wired editor Chris Anderson calls this tapering of cultural production “the long tail.” EBay is another long-tail business, Anderson says: “It is not about auctioning a few old masters for 20 million pounds apiece; it’s about providing a market where huge numbers of people can sell almost anything for a couple of quid.” There are physical limits on how many titles a shop can stock or a cinema can screen. But in a digital age, there are no such limits. Abundance, paradoxically, could be a highly disruptive force in the traditional economy, as file-sharing networks and DVD knockoff shops have demonstrated. A number of tech bloggers are calling for a “new economics of abundance” so that civil society can shape its influence without legislatively killing its spirit.

It may sound absurd to speak of “abundance” in a time of a global economic collapse, environmental crisis, naked political opportunism and endless resource wars. Not only that, to praise technology uncritically is as one-sided as the patronizing worship of aboriginal cultures. Digital technology can isolate its users as much as it can connect them. If there really is an emerging economics of abundance, it will likely be a double-edged sword, with new problems of its own. But there is also the possibility it may offer an alternative to the worst excesses of monopoly capitalism and privatized kleptocracy. The way into a better future is to make past ways of doing things obsolete.

The real irony is that classical economics has always promised abundance through the management of scarcity. The SUV, the 50-inch television and the McMansion in a gated community have certainly been signifiers of middle-class comfort, but not sustainable wealth or social capital. In the past century, even reciprocal gift giving has been co-opted by the market, with the ostensible warmth and sentimentality of the Christmas season belied by the retailers’ bottom line, and the perfunctory mass-march of consumers for Christ.

Scarcity economics has both authorized and valorized our methods for emptying the world of its natural capital, while ensuring indebtedness – personal, national and ecological – is the norm. So it’s no accident that this dark vision of the world has become a self-fulfilling prophecy. With the threat of very real scarcity looming on the horizon – not just in credit, but in arable land, fresh water and other species – never before have so many of the world’s people been so ready for new ways of thinking about and organizing their lives.

And the new ways are having an effect, at least in the area of power production. Almost weekly, there is news about leaps in the efficiency of solar power technology, as the costs of solar and wind devices continue to plunge. Solar power use is doubling every two years and will be the dominant energy source within the next 20 years, according to respected inventor and author Ray Kurzweil. Sunlight can’t be metered and it’s hard to imagine nations going to war to grab an enemy’s photons. Solar will soon be price competitive with the cheapest form of energy: coal. There is no way that “King Cong” – coal, oil, nuclear and gas – can compete with nature’s other bounty, with the gift we’ve always had all around us, its access limited only by our imaginations. The current economic downturn, along with the plunging price of oil, may slow the acceleration of this trend for a time, but as long as civilization lasts, it is unlikely to be anything but exponential and socially transformative.
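The arithmetic behind Kurzweil’s projection is easy to check for yourself. A back-of-the-envelope sketch, assuming only his stated premise of a doubling every two years:

```python
# If solar capacity doubles every two years, then twenty years
# allows ten doublings - roughly a thousand-fold increase.
years, doubling_period = 20, 2
doublings = years // doubling_period   # 10 doublings
growth_factor = 2 ** doublings         # 2^10 = 1024
print(growth_factor)                   # prints 1024
```

Whether the doubling trend itself holds is, of course, the real question; the sketch only shows how quickly exponential growth compounds.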

Some argue that even without theorizing new technologies, it is conceivable that there already exists enough energy, raw materials and biological resources to provide a comfortable lifestyle for every person on Earth. That may well be so, but if further technological advances are only to serve further population growth, as they have in the past, the gains will eventually take us back up against the biosphere’s natural limits. Technological advance has to serve a higher purpose than endless growth. The mind must come into accord with the heart, and here Hyde’s work is instructive. Ancient patterns of communal gift giving acknowledge the true sources of wealth:

“Every participant in the (gift-giving) cycle literally lives off the others with only the ultimate energy source, the sun, being transcendent. Widening the study of ecology to include man means to look at ourselves as part of nature again, not its lord. When we see that we are actors in natural cycles, we understand that what nature gives to us is influenced by what we give to nature. So the circle is a sign of an ecological insight as much as of gift exchange. We come to feel our selves as part of a larger self-regulating system.

“And where we have established such a relationship we tend to respond to nature as part of ourselves, not as a stranger or alien available for exploitation. Gift exchange brings with it, therefore, a built-in check upon the destruction of its objects: with it we will not destroy nature’s renewable wealth except where we also consciously destroy ourselves.”

If the economy of abundance isn’t strangled in its cradle – and it looks like we’re too far down the road for that to be possible – can it find rapprochement with the economy of scarcity, or even displace it entirely? Beyond that, there’s the question of what forms it will take, and if we can join the archaic wisdom of aboriginal gift cycles with the promise of computer technology. Some futurists have floated the idea of “energy credits” or some other notational unit to replace money. Others see nanotechnology, automated manufacturing at microscales, as freeing humans at last from the boom-bust cycles of scarcity capitalism. But, at this stage, it’s too early to see anything other than the vaguest outlines of a world evolving past free market monopolies and defunct, Soviet-style central planning. Given the many threats on the horizon, we may never get there, but it’s the business of the future to be unknown.

“Greed and competition are not the result of immutable human temperament,” writes Bernard Lietaer, a co-designer of the EU currency system. “Greed and fear of scarcity are in fact being created and amplified…the direct consequence is that we have to fight with each other in order to survive.”

Ultimately, the human relationship with the world is in part conditioned by how we interpret it – as one principally of scarcity or abundance. Perhaps one day we’ll realize we’re the custodians of life, but not its keepers, and we can wave goodbye to this shadow realm of hungry ghosts, fighting for pieces of paper decorated with the portraits of dead leaders. Singer-songwriter Bruce Cockburn summed up the distinction between these competing visions in his Lewis Hyde-inspired song, The Gift:

In this cold commodity culture
Where you lay your money down
It’s hard to even notice
That all this earth is hallowed ground
The gift keeps moving
Never know where it’s going to land
You must stand back and let it
Keep on changing hands.

Ripping tales


Scene from RiP: A Remix Manifesto

Intellectual property rights are among the most vexing issues of the digital era. People on different sides of the planet exchange music, software, images, TV shows and even entire movies over the internet. Traditional media companies are terrified; the old business model was predicated on big media being able to control the distribution channels – CDs, DVDs, TV and so on – but digital technology and the internet have changed everything. Users are becoming more sophisticated at ripping, editing and sharing digitized content for free across the wires, using peer-to-peer software. It may not always be strictly legal under copyright law, but as media conglomerates are discovering at great expense, there’s little they can do to prevent this growing trend.

RiP: A Remix Manifesto, a feisty, NFB-produced documentary showing at the Whistler Film Festival December 4-7, is a call to overhaul copyright laws. As the title suggests, RiP is particularly interested in the legally grey area of remixing existing works, although director Brett Gaylor also introduces individual mom ‘n pop downloaders who have been stamped on by the heavy boot of the litigious music industry. The group includes high school kids, a Texan pastor and Jammie Thomas, the single mom ordered to pay the recording industry $222,000 for allegedly downloading 24 songs. By criminalizing its customers, the music industry has set itself up for attack and Gaylor has great fun mocking its bully-boy tactics.

RiP focuses on trendy laptop musician Girl Talk, aka Gregg Gillis, a Pittsburgh biomedical engineer who mashes up hundreds of samples from other artists’ works into his own distinctive compositions. The film suggests that artists have borrowed from their predecessors since time immemorial and that digital mash-ups are just an extension of that. What’s more, the cost of getting clearance for Girl Talk to perform the songs would be prohibitive. So he doesn’t seek it, although the threat of litigation always hovers over his head. Gaylor memorably makes the point about how copyright is stifling creativity by teasing us with footage of a Girl Talk gig where everyone is clearly having a great time (including Paris Hilton), but the soundtrack is muted. He uses the same device with the song Happy Birthday – owned by Time Warner – to show how absurd copyright law can be when taken to its natural conclusion.

This is the kind of film where everyone is either a villain or hero. Metallica and the Rolling Stones come off badly as big-business recording artists, while Radiohead, which released its album direct to the web for whatever price fans wanted to pay for it, appears progressive. Star interviewee is Lawrence Lessig, the Stanford prof who came up with the ubiquitous Creative Commons licence and helped make redefining copyright laws one of the blogosphere’s causes célèbres.

Manifestos aren’t subtle things; big media is not quite as loony as it appears here. Some artists won’t warm to the message “Times are changing; get used to it,” but RiP’s campaign-style approach still pays off with an entertaining 80 minutes complete with snappy video mash-ups and montages. Look for RiP in cinemas this spring. You can contribute to a remix of the film at


Robert Alstead maintains a blog at

From scarcity to abundance

by Geoff Olson

Many of us consider philosophy to be a specialized field of study, with little real-world application. Yet we’re all philosophers of one kind or another. We all have our own ideas about love, freedom and the meaning of life – or its non-meaning. These ideas, though not always articulated, often guide our lives to a surprising degree.

Just as fish don’t have any notion of the medium they swim in, one particular belief system so thoroughly pervades our culture that most of us would be hard-pressed to identify it as a philosophy at all. This is the notion that life is defined by a competition for dwindling resources. The philosophy of scarcity has dominated cultural life in the West – and academia, business, government, the military and beyond – for the past few hundred years and pervades everything from PBS nature documentaries to reality television shows like The Apprentice and the Survivor series. Its essence is summed up by hard-nosed realists and their dictum “There is no free lunch.”

As a philosophy, scarcity is given substance by real-world examples. Oil, water, food, money: all appear to be in perennially short supply, as expressed by the recent meme, “peak everything.” Famine, drought and wars over territory make scarcity seem the norm for the planet, rather than the exception. But how much is our perception of scarcity driven by a cultural consensus that it is fundamental to existence? There is a real world out there, a world that often fails to deliver us the goods, but there’s no denying that our relationship to it is conditioned by our beliefs and interpretations.

For some time now, a different idea has been brewing in popular culture: the philosophy of non-scarcity, or abundance. The exploration of this idea, however, has been mostly limited to extropians and science fiction writers and ignored by academia. “Abundance” has been a word relegated to evangelical and new age groups.

In his blog, Wired editor Chris Anderson noted this absence from academic dialogue: “My college textbook, Gregory Mankiw’s otherwise excellent Principles of Economics, doesn’t mention the word abundance. And for good reason: if you let the scarcity term in most economic equations go to nothing, you get all sorts of divide-by-zero problems. They basically blow up.”

One of the greatest shifts in human thinking came with the discovery that the world was not flat, but round. This implied that the finite globe could be circumnavigated and its territories mapped and conquered. In 1600, Queen Elizabeth I granted a charter to the East India Company, a mammoth trading monopoly with the right to create proprietary colonies anywhere on Earth. The East India Company was both the Halliburton and Blackwater of its time. It mapped out and mopped up the resources of distant lands, while encouraging the inhabitants to become pious, proto-Britons, or at least compliant widgets in its worldwide labour machine.

Colonel L. Fletcher Prouty, author of The Secret Team, notes how the East India Company founded Haileybury College in England to “train its young employees in business, the military arts, and the special skills of religious missionaries. By 1800, it became necessary to initiate the task of making an Earth inventory, that is, to find out what was out there in the way of natural resources, population, land, and other tangible assets.”

The first man put in charge of this vital census was Thomas Robert Malthus, head of the department of economics at Haileybury College. He is remembered today as the prophet of scarcity, author of the enormously influential 1798 Essay on the Principle of Population. In this treatise, he proposed, “Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio.”

In other words, unchecked population growth always exceeds the growth of the means of subsistence. In modern parlance, we call it the “carrying capacity” of the environment. Actual population growth is held in check by “positive checks” – starvation, disease and other disasters – and “preventive checks” – postponement of marriage, contraception and other practices that reduce the birth rate.
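Malthus’ two ratios can be made concrete with a toy calculation (a sketch of the arithmetic only, not anything taken from the Essay itself): let population double each generation while subsistence gains only a fixed increment.

```python
# Toy illustration of Malthus' ratios: geometric growth (doubling)
# versus arithmetic growth (a fixed increment per generation).
population, food = 1.0, 1.0
for generation in range(1, 7):
    population *= 2   # geometric: 2, 4, 8, 16, 32, 64
    food += 1         # arithmetic: 2, 3, 4, 5, 6, 7
    print(generation, population, food)
# By the sixth generation, population has grown 64-fold while
# subsistence has grown only 7-fold.
```

The particular starting values and doubling rate are arbitrary; the point is that any geometric series eventually overwhelms any arithmetic one.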

A certain young naturalist, having recently returned to England from the Galapagos Islands, had an aha moment when he came across Malthus’ essay. Surely, constraints on population acted as the driver of animal adaptation through a “survival of the fittest.” Charles Darwin introduced his revolutionary theory of evolution through natural selection with the 1859 publication of On the Origin of Species.

Both Malthus and Darwin have received a bad rap over time. But the problem wasn’t so much with the signal as the reception. Malthusians and Darwinists didn’t just seize on the new thinking to justify the status quo; they found entirely new ways to rationalize brutality. The monstrous legacy of eugenics in the US and Germany, along with the pseudoscientific justifications for racial segregation and the sterilization of “mental defectives” – to say nothing of “ethnic cleansing” – owe much to self-serving interpretations of Malthusian/Darwinian ideas. And, of course, there’s the perpetual idea that the wealthy and powerful owe nothing to the weak and powerless, an idea newly given moral authority by supposedly ironclad laws of nature.

The “white man’s burden” and other paternalistic notions about bringing freedom and democracy to indigenous people also owe plenty to this nineteenth-century meme.

Malthus, the first demographer for transnational interests, mapped the world’s resource base. The British Empire did the rest. In a remarkably transparent speech to parliament in 1914, Winston Churchill said, “We are not a young people with an innocent record and a scanty inheritance. We have engrossed to ourselves an altogether disproportionate share of wealth and traffic of the world. We have all we want in territory, and our claim to be left in the unmolested enjoyment of vast and splendid possessions, mainly acquired by violence, largely maintained by force, often seems less reasonable to others than to us.”

The idea that might makes right, and its justification through scarcity, still persists today. There’s an enduring current of thought in western culture that we, as individuals, nations or species, adapt and improve through making others lose. Even though evolutionary biology has come to see cooperation as being just as important as competition, the social sciences have yet to catch up. Classical economics still persists in the notion that human beings are “rational utility maximizers,” isolated agents driven by nothing more than self-interest. Modelling more subtle forms of behaviour, such as altruism within families and communities, would simply make the numbers blow up.

One thinker who saw through this self-serving cant was Richard Buckminster Fuller, best known for his contributions to mathematics and architecture, including his “geodesic dome.” With his elfin stature and coke-bottle-thick black glasses, the Bostonian became an instantly recognizable icon for intellectual adventurism in the sixties. He wore many hats, including those of poet, urban critic, social scientist and global planner. (Two years after his death, an enclosed molecule was discovered that actually follows the “synergistic” geometry Fuller believed would be found on all levels of nature once researchers began to look for it. In his honour, the molecule was named buckminsterfullerene, or the “buckyball.”)

While Fuller believed that properly applied design science could free all human beings on the planet from poverty and ignorance, “advantaging all without disadvantaging any,” he noted that the correct application of these sciences was perpetually held back by “ignorance, fear, and zoning laws.”

Born into a very wealthy Boston family, Fuller had a unique insight into the Malthusian mindset of the ruling class. In the biography, Bucky: A Guided Tour, author Hugh Kenner explained how a “rich uncle did Bucky the favour of taking him aside to explain in Boston’s terms how the world was.” The unpleasant, but unassailable, truth was this: there wasn’t enough to go round for everyone. “This had apparently been proven mathematically, three generations earlier, when the statistician Thomas Malthus demonstrated exactly how population tended to outstrip resources.”

The balding Brahmins of Boston, like the elite class elsewhere, “had outgrown the era of the Golden Rule, the formulation of a less crowded world.” As Bucky’s uncle explained, “The possessions of the haves were now founded on the destitution of the have-nots, and despite Sunday-school pieties serviceable to placate women, that was henceforth the unalterable state of things.”

In Kenner’s retelling, the rich uncle told the young lad that it was necessary for a rich man “to cultivate enough of the red tooth and the unsheathed claw to ensure that he and his loved ones should be haves. This was not nice, and he need not distress the innocent by talking of it, but there was really no choice.”

It had been established that a man’s chance of passing his life in any comfort was about one in 100. “It is not you or the other fellow,” the uncle explained; “It is you or one hundred others.” For Fuller to prosper with a family of five, he would have to slit the throats – genteelly, of course – of 500 others. “So, do it as neatly and cleanly and politely as you know how, and as your conscience will allow.”

By imagining historical necessity and biological destiny were one and the same, Fuller’s relatives had discovered that human evolution had peaked, by good fortune, with themselves. Bucky ended up rejecting their Scrooge-on-steroids reasoning, believing it to be based on nineteenth-century, closed system thinking.

The architect and mathematician believed the world is run by what he called “lawyer-assisted capitalism,” or LAWCAP. The original sin of LAWCAP was to believe that the struggle for finite resources condemned the majority of the world’s inhabitants to misery, while providing wealth and comfort to only the most cunning and predatory. Wrong, said Fuller. Since the end of the eighteenth century, technology has “ephemeralized,” increasing the energy yield of resources while simultaneously discovering new resources.

With late Victorian industrialization, steam power supplied work “for free,” beyond human muscle or horsepower, and factories could be kept going throughout the night. Malthus foresaw none of this – how could he? – nor could he have predicted the scientific discoveries of the twentieth century, which created entirely new markets and middle-class wealth, along with increasingly sophisticated weapons of destruction.

Fuller insisted that population does not increase steadily, but actually levels off when design science extends to all its members. In fact, demographic studies have consistently demonstrated that one of the most significant factors in reducing national birth rates is the education of women.

As American philosopher Robert Anton Wilson once observed, “Known resources are not given by nature; they depend on the analytical capacities of the human mind. We can never know how many resources can be obtained from a cubic foot of the universe: all we know is how much we have found thus far, at a given date. You can starve in the middle of a field of wheat if your mind hasn’t identified wheat as edible. Real Wealth results from Real Knowledge, which is increasing faster all the time.”

So what does the economics of abundance actually look like? We’ll take a look at this next month.

Break out and break ins


Scene from The Boy In The Striped Pajamas

There was not a preview of the teenage rites-of-passage comedy Growing Op before we went to press, but the film should garner more interest than the average Canadian production, which is typically in and out of the cinema before you can say “hydroponic lighting system.” Writer-director Michael Melski, who hails from Sydney, Nova Scotia, drew inspiration from news stories of Vancouver grow-op raids. However, while the action takes place in a suburban grow-op, the film is not about drugs. It’s about a teenage boy, Quinn – home-schooled and uncertain – trying to find his way in life. Says Melski: “It’s a story about Nature—about a young man growing through change, about the inexorable pull of first love, and the power of family. The long arc of the film is Quinn discovering his true nature.” Growing Op stars Rosanna Arquette (Pulp Fiction), Rachel Blanchard (Flight of the Conchords), Wallace Langham (Little Miss Sunshine), and newcomer Steven Yaffee (MVP). The soundtrack features many up-and-coming Canadian bands such as punk rebels Teenage Head, Matt Mays and El Torpedo, Joel Plaskett Emergency, Classified, Jill Barber, Amelia Curran, and Nathan Wiley.

Still with Canadian films, Deepa Mehta’s latest, Heaven On Earth, is out this month and has had mixed reviews. The film tackles the subject of arranged marriages through the story of Chand, a young woman who gives up her comfortable Indian community to move in with her socially sanctioned but abusive husband, Rocky. Deeply unhappy, Chand retreats into an inner life based on myth and fairy tales, creating a movie that some critics have called a “muddled” mixture of reality and fantasy.

Fresh from winning last month’s audience award for best film at the Vancouver International Film Festival comes I’ve Loved You So Long (Il y a longtemps que je t’aime). A family drama of guilt and grief, it follows Juliette, a woman coming to terms with her past and present after being released from a 15-year stint in prison. The slow-burn story follows the gradual rapprochement of Juliette (Kristin Scott Thomas, showing excellent command of the French language) with her family after her younger sister Léa (Elsa Zylberstein) invites Juliette into her family’s home.

A different kind of captivity is examined in The Boy In The Striped Pajamas (opening on the 14th), a powerful Holocaust drama based on John Boyne’s bestselling young adult novel. At its centre is Bruno, the eight-year-old son of a high-ranking Nazi officer at Auschwitz, who goes on boyish explorations of a nearby “farm” where all the workers wear “striped pajamas.” In his travels, Bruno befriends Shmuel, a bald-headed boy his age on the other side of the barbed-wire fence. Their friendship brings about a sequence of events that leads to a moving and, not unexpectedly, tragic conclusion.

If you are looking for something lighter, Happy Go Lucky is an unusually optimistic, feel-good movie from British director Mike Leigh, who also gave us the excellent but bleak Secrets & Lies and Vera Drake. Happy Go Lucky was developed using the improvisational techniques of Leigh’s previous work, with its emphasis on deep characters. The film revolves around Poppy (Sally Hawkins), a chirpy elementary school teacher in London, England, who takes up driving lessons after someone steals her bicycle. When Poppy finds herself stuck behind the wheel with a socially awkward instructor, the polar opposite of herself, it is an opportunity for her to shine. The film does depend on you being won over by Poppy, but for most people that won’t be a problem. Oscar nominations are already being talked about for Hawkins.

Robert Alstead maintains a blog at

Remembering War

by Geoff Olson

On December 24, 1914, strange things were happening in the battlefield trenches. In the region of Ypres, Belgium, German troops propped Christmas trees on their parapets and decorated them with candles. That evening, they sang out Christmas carols in German to their enemies across the muddy no-man’s land. The British troops responded by singing Christmas carols in English. The camaraderie escalated and soldiers on both sides began to leave the trenches, mingling and exchanging gifts of whisky, jam, cigars, chocolate and the like. The Christmas truce spread down both trenches, according to military historian Gwynne Dyer, “at the speed of candlelight.”

While accounts of this often-told tale vary, all would agree that the Germans initiated the truce. In his book, The Small Peace in the Great War, Michael Jurgs notes that events were kicked off a few days before Christmas, when a German regiment lobbed a carefully wrapped package across the no-man’s-land to the British side. Inside was a chocolate cake, with a note requesting the soldiers to join in an hour-long ceasefire that evening, to celebrate their captain’s birthday.

This mass outbreak of peace on the front alarmed the high command on both sides. They issued orders against fraternization, but it was days before all the men were back in the trenches, returning to the all-important business of killing each other. (In 1915, a similar Christmas truce occurred between German and French troops, and during Easter of 1916, a truce also opened up on the Eastern Front.)

We have Remembrance Day, but where on the calendar do we mark such epochal moments in wartime, when the sacrificial lambs laid down their arms and greeted one another as kindred spirits?

Boomers and their offspring have been lucky enough to live through an extended period of relative peace, following the two great wars. According to the conventional wisdom, our Canadian bacon was saved by the Cold War doctrine of MAD – “mutual assured destruction.” An atomic Sword of Damocles hung over our heads, making conventional warfare a thing of the past. Of course, this is only a partial truth. While it’s certainly likely that nuclear stalemate put a crimp into conscription, that didn’t stop the superpowers from playing out their proxy wars across the world, from Angola to El Salvador. The Cold War put diplomatic relations between East and West into deep freeze, but hot wars in the global south sent millions to their graves and created misery for millions more survivors. The fall of the Berlin Wall, and the collapse of communism, momentarily halted superpower brinkmanship, but not much else. The march of war continued through Kosovo, Rwanda, Darfur, Lebanon, the Congo, Afghanistan and Iraq.

Back in the seventies, I was just a naïve kid on the outskirts of Empire, whose closest acquaintance with battle was the TV series MASH and the BBC series The World at War. The sitcom was bloodless and the documentary footage grainy and discreet. The past was buried and the future looked good. The Four Horsemen of the Apocalypse had been unseated and put to work shuffling papers in the Pentagon and Kremlin.

It seemed my parents’ generation hadn’t just defeated poverty, but conventional warfare as well. The price was paid in body counts. Factoring in war-related famine and disease, there were an estimated 10 million civilian casualties in World War I and 47 million in World War II. Every year on Remembrance Day, the Commonwealth nations officially commemorate the sacrifices of members of both the armed forces and of civilians in times of war. But the remembering is definitely weighted toward the warriors.

Yet in the final analysis, war isn’t about remembering, but dismembering – separating people from their families and homes, and even their life and limb. For most of history, it has smashed civilian life, paralyzed relief efforts and dehumanized its blunt instrument: the warrior class whose youthful idealism is channelled into the state narrative of heroism.

The Cold War may be over, but we’re still in a hair-trigger situation, especially with the US policy of preemptive nuclear strikes against “rogue states.” In his book War, Dyer observes, “All the major states are still organized for war and all that is needed for the world to slide back into a nuclear confrontation is a twist of the kaleidoscope that shifts international relations into a new pattern of rival alliances.”

Does war come naturally to human beings? Let’s go back thousands of years, before the emergence of civilization. Imagine a group of tribes living together peacefully, in balance with their environment and with one another. Suddenly, there is a dry spell or a collapse of the local food supply. One tribe decides to make some weapons and conquer the next tribe, turning them into slaves. The other tribe has three choices:

1) If they flee, the paradigm of the violent tribe expands into their territory.

2) If they submit to slavery, the paradigm of the violent tribe expands into their territory.

3) If they build weapons to fight back, the paradigm of the violent tribe expands into their territory.

This is the crux of Andrew Bard Schmookler’s 1984 work, The Parable of the Tribes: The Problem of Power in Social Evolution. In Schmookler’s thought experiment, diplomacy is not an option with the violent tribe, which subverts the surrounding tribes to its paradigm. He believes this is how the heavily barricaded, heavily armed city-states of the ancient Near East emerged. There is little in the archaeological record to contradict him.

Similarly, historian and eco-activist Derrick Jensen holds that civilization is not only inseparable from war; it is war. Expanding city-states required a growing influx of energy and resources from outlying areas, which put them in continual conflict with their neighbours. To defensively arm was interpreted as an aggressive posture, requiring a preventative response. Preemptive strikes predate the Bush administration by thousands of years and arms races are older than Hadrian’s Wall.

The late British scientist Jacob Bronowski described war as “organized theft.” Wars don’t always begin with plunder, but they have nearly always ended with it, whether it was Carthaginian slaves, Incan gold, Nazi rocket scientists, coastal African diamonds or Iraqi oil.

War appears to be an emergent property of complex systems. Ironically, it may come naturally to societies, but not to individuals. It takes a fair amount of programming to counteract our true natures. Dyer notes that even World War 2 commanders discovered their men were often reluctant to kill in combat situations, lifting their weapons up and away from the target when they fired: “When US Army Colonel SLA Marshall finally took the trouble to inquire into what American infantrymen were actually doing on the battlefield in 1943-45, he found that, on average, only 15 percent of the trained combat riflemen fired their weapons at all in battle. The rest did not flee, but they would not kill – even when their own position was under attack and their lives were in immediate danger.”

Military psychology has spent decades determining what it takes to build the perfect warrior. The shaved heads, the drills, the sleep deprivation and the verbal abuse of basic training are meant to break down the pre-existing character and create a blank slate for military programming. Getting civilians onboard requires even more work. With the human costs of the two Great Wars recorded by scholars, recreated by Hollywood and rotated on The History Channel, it’s become more difficult for First World leaders to sell foreign campaigns to civilians. To convince them that war is either laudable or unavoidable takes all the machinery of social engineering: public relations outlets, advertising firms, media, psychological operations departments and faith-based organizations. For the aggressor nations, it’s always the same gig: the respectable convince the gullible that they’re in danger from the unspeakable.

War – what is it good for? Absolutely nothing, according to pop culture. But we have to ask, if something so deadly really works against everyone’s interests in the long term, why does it persist into modern times? Authors often use fiction to reveal unpleasant truths and no one excelled at this more than British writer George Orwell. In his novel 1984, he freely speculated on modern warfare’s ultimate purpose:

“The primary aim of modern warfare is to use up the products of the machine without raising the general standard of living. Ever since the end of the nineteenth century, the problem of what to do with the surplus of consumption goods has been latent in industrial society. From the moment when the machine first made its appearance, it was clear to all thinking people that the need for human drudgery, and therefore to a great extent for human inequality, had disappeared. If the machine were used deliberately for that end, hunger, overwork, dirt, illiteracy and disease could be eliminated within a few generations.”

This approach is a no-win situation for the elites, Orwell claims: “For if leisure and security were enjoyed by all alike, the great mass of human beings who are normally stupefied by poverty would become literate and would learn to think for themselves; and when once they had done this, they would sooner or later realize that the privileged minority had no function, and they would sweep it away. In the long run, a hierarchical society was only possible on a basis of poverty and ignorance… The problem was how to keep the wheels of industry turning without increasing the real wealth of the world. Goods must be produced, but they must not be distributed. And in practice, the only way of achieving this was by continuous warfare.”

And here is Orwell’s slam-dunk conclusion: “The essential act of war is destruction, not necessarily of human lives, but of the products of human labour. War is a way of shattering to pieces, or pouring into the stratosphere, or sinking in the depths of the sea, materials which might otherwise be used to make the masses too comfortable, and hence, in the long run, too intelligent.”

1984 featured three warring states: Oceania, Eurasia and Eastasia. The ever-shifting alliances and wars had one principal aim: to align the people unquestioningly under their respective leaders. The line-up of foreign villains might change, but the propaganda was essentially the same for all three states. Orwell’s nightmare vision looks scarily prescient, given the three blocs we see emerging: the “North American perimeter,” the European Union and an alliance comprised of Russia, Iran and other nations. (Even 1984’s daily “Two Minutes Hate,” directed against an ever-changing line-up of villains, has its modern equivalent in Fox News.)

The so-called “war on terror” is just a new riff on an age-old theme. Our leaders have declared war on an abstract noun – a vaporous enemy can never officially surrender. Perhaps this is why John McCain said last year that US forces might be in Iraq for “a hundred years.” It would also explain why Canada’s defence minister in 2006, Gordon O’Connor, observed, “It is impossible to defeat the Taliban militarily,” a line recently echoed by British Brig.-Gen. Mark Carleton-Smith, who told the Daily Mail that an “absolute military victory in Afghanistan is impossible.” Canada’s former Chief of the Defence Staff General Rick Hillier was even more explicit in a statement reported in the Toronto Star in 2006: “That’s never been the strategy – to defeat them [the Taliban] militarily.”

Orwell again: “In accordance with the principles of doublethink, it does not matter if the war is not real. For when it is, victory is not possible. The war is not meant to be won, but it is meant to be continuous.”

But war isn’t solely a political problem; it’s an existential one. Avoiding it requires more than Kissinger-like realpolitik, and resisting it requires more than a Remembrance Day poppy. War was not buried in the ashes of Hiroshima, Dresden or Coventry, as my parents’ generation had hoped. It’s all around us. Modern consumer society feeds off ongoing, internalized battles: drug and gambling addictions, body image disorders, clinical depression, advertising-driven self-loathing and all the bad craziness of our hyper-caffeinated, overworked, overextended lifestyles.

Orwell’s “continuous warfare” has been softened and projected into our day-to-day lives, with a North American political economy engineered to break the middle class. But it doesn’t stop there. The emerging culture of constant surveillance and expanded domestic policing is starting to resemble the jackboot dystopia of Orwell’s 1984 as much as the doped-up utopia of Huxley’s Brave New World.

The great irony is that, in comparison to people in other parts of the world, we still lead lives of great opulence. For the diaspora of the Third World, war is no metaphor; it’s an ever-present threat. According to Médecins Sans Frontières, there are currently 43 million people across the globe displaced by war. Sixteen million of them are refugees and more than half are from only six nations/regions: 4.6 million from Palestine, 2.3 million from Iraq, 3.1 million from Afghanistan, 552,000 from Colombia, 523,000 from Sudan and 457,000 from Somalia.

In the face of capitalism’s continual crises of overproduction and the mechanical lurch toward war, there appears to be little reason for optimism – except for the fact that never before in history have so many people been linked together, with so much potential for collective awareness. And in spite of any efforts of politicians, policy wonks or police, our information technologies may have reached the stage where they cannot be fully controlled from the top down. With increasing cynicism over traditional sources of media, much more hope is being pinned on cyberspace. For pessimists, the Internet remains little more than an infotainment “Tower of Babble,” a mad profusion of narrow interests. For optimists, it’s becoming something like a Manhattan Project of the human spirit.

As a German prisoner of war, the late author Kurt Vonnegut survived the largest massacre in European history: the firebombing of Dresden. “It was pure nonsense, pointless destruction,” he wrote in his last book, A Man Without a Country. “The whole city was burned down and it was a British atrocity, not ours.” At the end of his days, Vonnegut cast about for meaning for the signature event in his life and all the mass insanity he had witnessed since. “What is life all about?” he asked his sons and daughters. One son, a pediatrician, had a short, precise response. “Father, we are here to help each other get through this thing, whatever it is.”

That’s what the soldiers of the First World War were doing in the few days before Christmas of 1914, helping one another through this thing. Perhaps it was the long stretches of boredom, punctuated by moments of abject terror, which led the German side to try something unheard of. But somehow, for both sides, the tribal circles of compassion expanded out across the enemy lines. In effect, both sides committed an act of spiritual defiance and went off-script from the parable of the tribes. British soldiers exchanged Christmas pudding and cigarettes for German cigars and cake. Both sides sang in their own languages and even improvised games of soccer in the muddy no-man’s land.

Dyer noted, “These were not professional soldiers, after all; six months before they had been farmers or bank clerks or students, and for all the naïve enthusiasms with which they had greeted the war, they had never really wanted to kill anybody, let alone to die. In its inarticulate way, it was the first peace demonstration of modern times.”

Blue Gold and other VIFF gems


Scene from The Atom Smashers

In recent years, there has been a spate of documentaries on the subject of water – its increasing commodification and the greed, corruption and mismanagement surrounding it.

At the heart of Sam Bozzo’s Blue Gold: World Water Wars (at VIFF October 9 and 10) is a strongly held belief that access to fresh water should be a basic human right. Partly educational, with fine little animations explaining how the water cycle works, it’s also a plea to recognize the scale of the problem. Many cautionary examples of water privatization are paraded, from the violent ruptures in Bolivia when the government ceded its water rights to Bechtel, to grassroots actions in the US that have had mixed success in combating corporations tapping their water supplies.

The film depicts how, in the world of supply and demand, drought and water pollution are good for big business but bad for the environment. Consider the carbon footprint of desalination plants and truckloads of water criss-crossing the continent, for example. Lest it all become too depressing, the film knits together some good-news stories, such as the story of Ryan’s Well and accounts of how denuded water systems are being recovered.

Blue Gold doesn’t always get the facts right – water privatization didn’t happen throughout the UK; Scotland and Northern Ireland’s water services remained publicly owned due to grassroots opposition – but it identifies disturbing patterns that we should all pay close attention to.

A flurry of recent media coverage over the Large Hadron Collider (LHC) in Switzerland shone a spotlight on the 40-year-old search for the hypothetical “Higgs boson,” a “God particle” that physicists hope to eventually discover by smashing particles together at extremely high speeds, in complex and expensive particle accelerators. The documentary The Atom Smashers (October 4, 5, 8) picks up with a US team at the Fermilab laboratory, which has been working in this field of research for many years as the LHC prepares to come online. As the Bush government slashes away at its budget, Fermilab’s physicists are feeling the pressure to win this subatomic space race.

It’s not exactly clear what millions of dollars of publicly funded research has achieved, which is hardly surprising given the opaque nature of high energy physics, but it also leaves scientists struggling to justify the huge expense in lay terms. Directors Clayton Brown and Monica Long Ross make effective use of black and white animation to explain the workings of the Tevatron, Fermilab’s four-mile tunnel, where the particle smashing takes place. Broadening the focus to include the private lives, aspirations and setbacks of the physicists in Fermilab’s program adds a touch of human interest, underscoring the big question of why science, in general, has lost its value in Bush’s US. Science, it seems, is facing a serious image problem; the answer, however, seems as elusive as the Higgs boson itself.

Other VIFF films that look worthwhile include Let the Right One In (October 5, 6, 8), a genre-bending horror that has been getting great reviews on the festival circuit; Tokyo! (October 8, 9), a trio of films set in the Japanese capital by three very capable directors (Michel Gondry, Leos Carax, Bong Joon-ho); I Am Good (October 1, 7, 9), a light comedy from Czech director and VIFF regular Jan Hrebejk, who always impresses with the fullness of his characters; and the closing film The Class, a high school drama set in a poor multicultural Parisian suburb. The film won the Palme d’Or at Cannes this year.



Robert Alstead made the Vancouver-set bicycle documentary You Never Bike Alone, available on DVD at