Monday, March 24, 2008

Database Narrative and the Case of Southland Tales

When a cult following began to encircle Richard Kelly’s Donnie Darko seven years ago, fans took advantage of the story gaps and vague chain of events to expand their own personal interpretations of the story. Did Donnie travel in time to a new dimension in order to reconcile his relationships with those close to him in a way that his untimely death didn’t allow? Is the rabbit the orchestrator of this warp in time and space? Is the “chut up” girl supposed to be God? (Believe it or not, I’ve actually heard that last one.) These questions resulted in numerous late-night dorm discussions on the metaphysical nature of time, the universe, rabbits, etc., which led to an expansive cult network on the Internet, increased DVD sales, and a theatrically re-released “director’s cut.”

Kelly’s follow-up, Southland Tales, attempts to replicate this fanboy attraction to his work via an epically expansive narrative and equally vague (though much more confounding) ideas regarding the repercussions of brief burps in time and space and, not surprisingly, the all-important subject of the apocalypse. The result, however, attempts to be everything and ends up being nothing. It’s an unapproachable combination of Philip K. Dick, Stephen Hawking, Terry Gilliam’s Brazil (1985), uninformed leftist politics, the information age, and disposable pop culture.

This is because Southland Tales does not even seek to contain, within the film itself, the entire scope of the story being told. Southland Tales is so packed full of half-executed plot points that it can’t possibly be read on its own—one is required and expected to do one’s research on the Internet before and after viewing the film in order to fully comprehend its narrative. An expansive multimedia network was developed for Southland Tales before its release (aimed at the media-savvy young audience that appreciated Kelly’s first film), including a three-part comic book series (the film, perhaps in nostalgic imitation of Star Wars, encompasses parts 4, 5, and 6 of the overall story—the first three parts are introduced in separate comics) and a complex web experience planned for the film’s website that never came to fruition because of the film’s infamously bad reception at Cannes and dismal commercial performance at the box office. Thus, most of the potential audience was denied the extrafilmic information needed to fully comprehend the story before Southland Tales was theatrically released.

Southland Tales asks more from its audience than many films do (or should). But Kelly’s film is not the first to create a multimedia narrative to inform a film franchise. The Matrix sequels all but mandated that the audience immerse itself in the extra information provided on DVDs, the Internet, and video games in order to fully comprehend the series. For example, anybody who did their research by watching The Animatrix before The Matrix Reloaded knew the important origin of the weaselly teenager (The Kid) whom Neo interacts with at the beginning of the film, but everyone else was left in the dark. The recent Cloverfield built such an expansive alternate universe on the Internet that it was difficult for many to discern which websites were intentionally part of the film and which weren’t.

However, the narrative of a single film cannot possibly be expected to contain all the information that an infinite web-based universe can, so films built with so much multimedia effort and hype could not help but disappoint in the end. Many were let down to find that, at its core, Cloverfield was simply a typical monster movie in fresh new packaging (unlike the web pages, nobody is returning to the movie itself to gather more knowledge—it made more than half its total gross in its opening weekend). Likewise, The Matrix Revolutions ended the trilogy with a whimper, not a bang, when the fulfillment of Neo’s prophecy looked relatively simplistic in contrast to the complex theories built by collective fans on the web.

(Shameless plug: for further reading on this subject, check out chapter three of Henry Jenkins’ fantastic book, Convergence Culture, or his blog.)

My experience of Southland Tales oscillated between brief stints of entertained exhilaration at its unapologetically inflated style and narrative, and annoyed bewilderment at its overflowing bombardment of information. Even setting aside the extra media necessary for experiencing Southland Tales, the film itself is jam-packed with information in each segment of each frame, yet virtually nothing signals what is most essential to comprehend. Southland Tales refuses to let the frame capture merely one image at a time, but is itself segmented into many frames: the film’s prologue, which introduces its setting, shows several screens and events all at the same time, and most characters inexplicably have their televisions on and laptops open at all times. Audio elements overlap as many voices are heard simultaneously (from characters and media in the scenes); sometimes dialogue seems to come from no particular source, and characters utter befuddling single words that feign significance but ultimately fall short.

Even the tattoos on The Rock—I’m sorry, Dwayne Johnson—present themselves as important symbols (anything from Jesus to Japanese characters) that the audience is assigned to pick up and interpret. In watching this film, I felt like I was sitting in the same chair as Miranda Richardson’s character, watching eight surveillance screens at once and attempting desperately to comprehend them all.

The result is the same as with several other films that go to great lengths to expand their narrative into a multimedia universe: what we get is not a film, but a vast series of ideas limited to a typical filmic running time. These ideas are often interesting, but they rarely come together in a cohesive film. Without attempting to comprehend Southland’s outer universe, the film as it stands alone looks like something that tried to be many things, but couldn’t decide exactly which one it wanted to be. We don’t get a narrative—we get a database with which to pick and choose our own semblance of a narrative.

From the perspective of a film analyst, the result of this task is often disappointing, because the exhausting amount of accumulated information necessary to comprehend the overall storyline serves only the film and is limited to the film itself. One would think this methodically delivered, seemingly important information would eventually lead to some sort of astounding revelation, but in the end, it only serves a story made up by one or two people who are no smarter than you and me. Just because a movie is expansive does not mean it is complex or significant. In the end, these complicated multimedia exercises say nothing about “real life”—they rarely reveal themselves to be grand allegorical commentaries on society at large. Maybe I’m an idealist or a killjoy, but I can’t help but feel disappointed when all this effort is put into something that, in the end, has no practical significance.

These movies say and do a lot to engage their audience, but rarely do they ultimately “mean” anything. Yet countless movies released each year—movies that “stand alone”—have plenty to say about social discourse and the human condition. Database narratives, by contrast, don the guise of significance through the importance placed on their homework, but the network of various media needed to comprehend the film can’t help but be revealed as merely a new marketing tool. As a result, any semblance of meaning is dumbed down to the immediate needs of the ever-expanding narrative structure. The Matrix was embedded with theological undertones, but these were inevitably revealed as superficial plot devices rather than statements of modern spirituality. Even the overt political landscape of Southland Tales (which seems to think of itself as a postmodern 1984) is merely a setting for science fiction fantasy, not a revealing satire, and its politics are as ill-informed as the characters involved with them.

Thus, information that would be essential to the plot of a traditional film is reduced to one superfluous fact among many. In a Q&A with Creative Screenwriting Magazine’s Editor-in-Chief Jeff Goldsmith, when asked if the apocalypse inferentially occurs after the film’s closing credits, Richard Kelly asked in return, as if it were plainly clear, “Did you see the tidal wave?” Goldsmith: “The what?” Kelly: “There was a tidal wave behind Justin Timberlake’s character while he was dancing.” Goldsmith: “Oh…nope, I didn’t see it.” The tidal wave, which one would think would be essential to understanding the film’s ending, becomes obscured as one detail within a frame cluttered with details. Thus, the important facts are never delineated from the disposable ones. And obscurity does not equal complexity—just because it’s difficult to understand your movie doesn’t mean it’s smart. I for one did not see the tidal wave, and I probably won’t go back and look for it, because I know the reward for my effort will not make the film any clearer or more enjoyable.

Such database narratives have sometimes been heralded as the future of fiction cinema, a way to fold our culture of increasing transmedia immersion and information overload into our typical modes of entertainment. But, at least at this point, filmmakers must find a way to sustain their films on their own merit while simultaneously expanding their universe elsewhere. These new forms of narrative experience must make the other media outlets both essential and optional (superfluous?) at the same time in order to work. The homework must not be forced upon the spectator, but offered as a possible reward for the more determined fan. Kelly and the Wachowskis must remember that Donnie Darko and The Matrix drew audiences in before their narratives expanded into multimedia universes, not the other way around.

Database narratives seem to work better on television than in the movies. Lost has been incredibly successful in molding its engaging narrative through audience interpretation of the vast amounts of information given on the show (but when this same team used a similar informational network to bring Cloverfield to the big screen, the result was simply not the same). The framework of Lost has required fans, both on the Internet and by the watercooler, to keep fresh at the task of uncovering the show’s many mysteries among the vast amounts of given information. (Lostpedia is evidence of just how great this following is, and how seriously they take the show.) Lost is the perfect prototype for television in the era of TiVo, TV on DVD, and Internet exhibition—its viewers attain a greater understanding of the show’s narrative universe with the rewind and pause buttons.

Yet even Lost has suffered repercussions from diving headfirst into this new world of storytelling: many fans, annoyed by a consistent lack of resolution to many of the show’s mysteries, have abandoned it, feeling the writers have no great secret to reveal that will reward them for their efforts (one journalist said Lost forces us to “go down a rabbit hole with no rabbit”). As a result of losing about three million viewers since its first season, ABC has made a habit of airing reruns with captions that serve as filler to update viewers on certain important facts, even attempting the impossible task of informing brand-new viewers of the complexities of the story’s vast cobweb of information. Loyal viewers of Lost (such as myself) see this new “pop-up video” format as doing the homework that viewers were originally asked to do, and making clear connections that were originally intended to be interpreted by such fans, thereby dumbing down the assigned task that made Lost so engaging in the first place.

Even in its perfect model, database narrative still has, and will continue to have, its limits.

Haneke's Funny Games

My emotions and intellect received a shocking jolt the first time I saw Michael Haneke’s Funny Games (1997). I knew nothing about the film going in, so I was understandably unprepared for the twisted games that were about to be played with my set of expectations. Naturally, I empathized with, or at least sided with, the stereotypical bourgeois family as they were (physically and psychologically) tormented by a pair of relentless young sadists who (as sadists tend to do) take pleasure in the family’s pain (and this very act is their only semblance of motivation). Even when the instigator, Paul, begins breaking the fourth wall—first with a simple look and later with blatant verbal address to the spectator—and the “game” is revealed to be Haneke’s game with us, I still naively desired to see the family overcome this villainous pair. Alas, Haneke’s broad intellectual exercise never faltered or gave in, and the family’s demise was the only clear trajectory for the film.

I had known Haneke at this point only by the far subtler Cache (2005), which itself has one shocking moment that could easily have been transposed with any number of moments in Funny Games. However, as I became more familiar with his work (The Piano Teacher (2001), Code Unknown (2000), Time of the Wolf (2003)), it became evident that ambiguity and endings without closure are a deliberate and essential part of Haneke’s style.

Funny Games, however, stands in stark contrast to these other films because of the comparatively “obvious” way in which it conveys its meaning. It still contains Haneke’s typically deliberate ambiguity (the boys are given no origin or motivation for their actions), and the ending brings no “satisfying” narrative closure (the boys, after killing the whole family, go on to play further games with other families, so there is no significance to why this particular family is the one we watch), but Funny Games also seems to have very little subtext compared to his more restrained efforts. It becomes evident by the film’s end that it was not “about” the family or the boys, but was simply the filmmaker’s meta-textual exercise in deliberately subverting the normative audience expectations conditioned by Hollywood cinema (the protagonists do not win; in fact, they are helpless throughout), thereby making the spectator aware of those conditions. It is a commendable exercise—it tries to expand the possibilities of narrative by making us aware of the redundancy and similarity of so many films by breaking the “rules” those films usually adhere to. However, because of this deliberate approach, Funny Games is never more than an exercise—never a “film” in any traditional sense. It does not stand on its own within its narrative framework. It only “works” once the spectator understands its true intention.

Haneke has made clear his disdain for the “rules” that Hollywood (or simply “American cinema”) has created. He sees all his films as attempting to subvert these expectations and thus expand narrative filmmaking as an art form (Code Unknown does this by showing us only the scenes of a series of incidents that do not directly relate to the “narrative”—we see everything between the inciting incidents or “main events”). Yet Funny Games is his only film that seems to aim simply to subvert those expectations and achieve nothing more. The rest of his canon, by contrast, has self-contained narratives, no matter how unconventional or ambiguous they may be.

Ten years later, Haneke has remade Funny Games shot-for-shot in English, with English-speaking movie stars. While many pondered, baffled, why the hell Haneke would do this, I was ecstatic at the news of this remake. Haneke, after all, is not making a shot-for-shot remake of an American classic (like Gus Van Sant’s Psycho (1998)), but of a film most Americans aren’t familiar with. Haneke has defended his most recent film on the grounds that the original Funny Games (made in the filmmaker’s home country of Austria) was always an “American” film, because it sought to subvert those very expectations created by mainstream American filmmaking. Thus, the Funny Games remake seeks to attract a typical American audience and give them a discomfiting shock to their senses (as I experienced with the first film). In this context, it’s not hard to imagine that, had the new Funny Games opened in wide release to packed houses, and had all ticketbuyers left the film an infuriated mob, Haneke would be satisfied that his film achieved exactly what it sought to (and whatever exec at Warner Independent thought this was a good idea obviously has a huge respect for real talent, but s/he is a dumb, dumb businessperson).

And what a perfect time to release this in America! Funny Games aims to interrogate the way in which American spectators equate violence with entertainment, and many critics have pointed out the appropriate timing of imposing such an exercise at the tail end of a surge of nihilistic “torture porn” horror films. In fact, a trailer made for MTV.com edits the original trailer to make the film look like a torture porn horror flick (thereby “tricking” the ideal audience into the theaters). Needless to say, I eagerly awaited the release of Funny Games. I felt like I was in on the joke with Haneke regarding the prank he was about to pull on audiences, and I could stand apart from the typical spectator who knows not what they are about to get into, laughing hysterically alongside the filmmaker as angry audiences left the theaters.

But, as a friend of mine noted, Funny Games opened in limited release, and thus sought out the “elite” NY/LA audiences that would most likely already be familiar with Haneke’s work. And in seeing the remake, I realized a glaring contradiction in Haneke’s rebooted exercise: if his intention is to subvert audience expectations by making a shot-for-shot remake of a film that subverted audience expectations, isn’t he merely delivering exactly what audiences expect? Once one succeeds in changing audience expectations, they cannot be subverted by repeating the process in exactly the same way.

Knowing exactly what is going to happen takes all the wind out of Haneke’s exercise. The result loses its shock value, as the alternative now becomes the conditioned norm, and the process becomes not weighty and engaging, but redundant and even (I hate to say it) boring. I became numb to it all, as if I were an hour into the gratuitous violence and gore of a torture porn: I just didn’t care anymore, and was biding my time until it was all over.

If you know what you’re getting into, the film is devalued as a cinematic experience, yet it still carries some intellectual meaning, and I made some observations I hadn’t the first time I saw the original. Here are some things I felt Haneke was trying to say:

1) When the son has a bag placed over his head while the mother is forced to strip naked—even though the son has already been allowed to witness the brutally violent attacks on his father and mother—the bag works as an analogy for the practices of American film censorship: violence is predetermined as more acceptable than simple nudity. The notion that the son needs to be shielded from seeing his mother naked so as not to disturb him, when he has already been exposed to several scarring incidents involving his family, is absurd; likewise, film censorship (the enforced ratings system) cannot successfully shield youth from potentially offensive material that is pervasive in all other forms of media, notably violence (this concept becomes evident when the son’s death results in blood on the television: the unavoidable nature of pervasive media violence). Similarly, when the father urges Paul and Peter to keep their language toned down in front of his son, it comes off as ridiculous, even comical, that the father would worry about his son hearing some bad words after the boy has endured several incidents that would inevitably scar him for life.

2) The use of music in the film is sparse, but important. We are introduced to the family as they drive to their summer home, the father and mother listening to classical music and categorizing it by year, composer, composition, etc. But this is interrupted by jarring heavy metal music, the type that defies categorization and normal expectations of what “music” is supposed to be. This introduction is analogous to the film itself, which aims to structure itself beyond the categorization ascribed to and conditioned by typical mainstream narrative filmmaking (I should note that I was not the first to make this observation—it comes from separate writings by film scholars Brian Price and Christopher Sharrett). But when Paul (for no apparent reason) plays this same heavy metal music on the stereo as he hunts down the son, it is one of many ways in which Haneke shows that these two villains are in full control of where the narrative goes, as this music shapes the exercise they are taking part in (this becomes most evident when Paul takes a remote and rewinds the very film we have been watching).

Okay, observational tangent over.

I don’t speak German, so the language barrier of the Austrian original was essential to my experience of the first incarnation of this exercise. When the original Paul addressed the audience, it broke the fourth wall but did not remove me from the film (as it was intended to do), for the direct address was not as direct—I received it through subtitles. I don’t watch foreign films the same way I watch films of my own country. They often don’t abide by American “rules,” so I approach them with a different set of expectations (Haneke’s assertion that Funny Games was always “American” is now understandable). Thus, I was able to be fully engaged with the original Funny Games throughout, and therefore shocked throughout. But, in English, this direct address did take me out of the film. It made Haneke’s exercise all too evident, and thus I was not able to be engaged—and therefore, not able to be shocked—by it, because I suddenly found myself in the same objective position as Haneke himself. (One minor difference I noticed between this and the original film is that Paul did not remind us of the film’s running time—no idea why.)

It seems that after creating this deliberate schism between audience and screen, Haneke allows us to once again immerse ourselves in the film when the two young villains temporarily leave. In a hardly forgettable (in both versions) ten-minute long take of the father and mother attempting to recuperate and escape while they have time, Haneke uses the uninterrupted nature of the shot to immerse us back into the illusion of reality, or suspension of disbelief, with which we experience most films. Yet, even in the school of long takes, ten minutes seems mighty excessive for a scene in which relatively little “happens.” Thus, Haneke is not drawing us back into the film, but making us aware of the lack of temporal ellipsis that would normally shape such a sequence. Haneke refuses to let the cut spare us from the devastating helplessness and (simultaneously, by contrast) banality of the parents’ recuperation. As the sequence unfolds in real time, the mother and father do not plan a daring escape, but waste most of their brief window of freedom trying to blow-dry a drowned cellphone back to life. After several minutes of this, the father finally says, “We’re wasting time,” and even this takes on self-referential significance, as Haneke seems to be deliberately wasting the spectator’s time. Although this sequence may be a more “realistic” portrayal of a highly improbable situation, its meaninglessness becomes evident as one of Haneke’s less obvious “games.”

At this point, it felt like Haneke (a director I greatly admire) had taken several steps back from the intellectual (though self-righteous and pretentious) forward movement of his career as a filmic artist. As previously stated, his filmography since the original Funny Games has been a diverse, fascinating array of films that reveal engaging, original narratives while simultaneously subverting and expanding traditional rules of narrative. Cache, in particular, addresses some of the same issues of spectatorship as Funny Games (Why do we watch? What are the repercussions of watching?), but through the fully realized, engrossing story of an anonymous spectator who sends a family tapes of uninterrupted surveillance of their house. The film works in two ways: 1) as an engaging, though very unconventional, thriller, and 2) as an exercise interrogating the nature of watching movies. Why, then, did Haneke decide to remake a film that only achieves the latter goal?

The best way to subvert audience expectations is to not be so overt about it—to contain it within a seemingly traditional narrative structure. Take No Country for Old Men, for example. It’s not nearly as radically rebellious as Funny Games, and I’d argue it benefits from that. No Country seems to be a typical action thriller, a cat-and-mouse game of suspense between hunter and hunted. Yet it gives us an ending (one that infuriated many) that only thematically (not pragmatically) resolves everything that came before—the bad guy gets away, the good guys die (unheroically, off-screen) or retire, and there is relatively little closure.

Likewise, Javier Bardem’s Anton Chigurh is a remarkably similar villain to the young pair of home invaders in Haneke’s film. Chigurh shares with Haneke’s villains a deliberately ambiguous origin and an unclear motivation for the actions that propel the film—or at least a motivation that runs counter to narrative norms and stereotypes: Chigurh’s “principles” are just as alien as Paul and Peter’s justifications for their actions, a lack of clarity made explicit when the pair rattle off obviously inapplicable—yet common, even stereotypical—motivations for villains in mainstream films. Most importantly, both share an almost superhuman ability to control the events of the narrative. Both films’ villains leave the narrative much as they entered it: going about their filmic world with the will and ability to do what they want when they want, with little obstacle. Chigurh’s casual escape from a devastating car wreck (one that, were this another movie, would have killed him) can be seen as analogous to Paul’s rewinding of Funny Games to revive Peter from a fatal gunshot wound: these villains are not meant to be three-dimensional, flesh-and-blood “characters” in the traditional sense, but symbols executed in the form of characters to serve each film’s thematic needs and inform the trajectory of its narrative. In a less obvious way, Chigurh is just as responsible for the subversion of narrative expectations in the film’s atypical ending as Paul and Peter are for the same type of subversion throughout Funny Games.

I recently witnessed a close family member watching No Country for the first time, on the edge of his seat with an engagement in the story that I haven’t seen in him for quite some time. Then he gave a loud, disappointed scoff once the end credits unexpectedly rolled. Another family member asked him, “What happened?” “Nothing!” he exclaimed in disgust. It’s as if the ending prompted him to forget his enjoyment of, and engagement with, the past two hours—he left the experience as if it never happened.

This is far closer to the reaction Haneke intended to get with the Funny Games remake (and arguably didn’t): engagement let down by disgust or disappointment at the conclusion, an angry ticket buyer leaving the theater. While it’s certainly true that many spectators came away from No Country with varying interpretations of (and reactions to) its controversial ending, these intense reactions have the potential to start a discourse about what we expect from films—thereby potentially expanding our narrative expectations beyond the “rules” dictated by Hollywood—just as much as they have the potential to divide audiences.

So maybe Haneke’s exercise didn’t work because other films—including his own—are achieving the same end without being mere “exercises.”

In an interesting twist, Ron Howard has expressed interest in remaking Haneke’s Cache and enhancing its generic thriller aspects to appeal to American audiences (it will no doubt have a more conventional ending). I’m sure Haneke reacted to the idea gleefully, imagining the Hollywood bastardization of his complex original film to come. After all, Haneke wouldn’t have a career in breaking the rules if he didn’t take part in making sure they were still in place.

(Here are a couple of humorous postcards relevant to this post that a friend shared with me—they also take my uber-serious approach to movies down a peg or two.)

On Lists: A Case Against the Oscars

My stance against the Oscars is not exclusive to this one awards show, but extends to the general way in which film is categorized and appreciated in mainstream culture. Too often we rate and judge films in terms of lists and how they stack up against one another therein, and this becomes very problematic. When lists are used for diversion on blogs such as this one (like a Top 10 ’80s John Cusack Movies), they can be an entertaining time-waster. But when such lists move beyond that triviality and self-awareness into the way we appreciate films at large, much is left to be desired. That all major critics are expected to release a “Top 10” list each year, and that every major media ceremony chooses a “best” from a category of five or so nominees, illuminates this preoccupation with lists when evaluating film. Perhaps the most arbitrary lists are the painful Top 100s AFI churns out each year.

What is fundamentally wrong with lists is that, by their nature, they inevitably have to exclude something significant. Very few major lists (top 10 of the year, top 100 ever, a category of nominations) exist without a notable exclusion or sacrifice made in order to conform to a preconditioned number of options. Yet numbers divisible by 5 (5, 10, 100) are just as arbitrary a basis for a list as any other. So why is conforming to the structure of the list more important than including everything that deserves to be on it? If there were eleven great movies this year, why not list those, and in no particular ranking? If there were four great supporting performances, why “fill out” a category with a fifth?

Another problem with lists is that they force meaningless comparison. The victor of an award, or the more prestigious numbered positions on a list, holds significance only for its designated place among other options, but this delineation does not necessarily prove any sort of superiority over the rest. This year’s Best Picture nominees consisted of a wild array of genres and filmic approaches (narrative, stylistic, etc.), yet we are expected to judge one as superior above them all. But how does one go about comparing There Will Be Blood to Juno? No Country for Old Men was deemed the best film of the year—but was it the best comedy of the year, or romance of the year, or most complex portrayal of women of the year? Of course not. The movie was good on the merits it set out to achieve, so how do we compare films that seek to achieve totally different responses, affects, and thematic goals? Yet the term “Best Picture” suggests something all-encompassing. Does this mean the other films are somehow inferior? Of course not. Yet the aura of the award or the No. 1 position suggests exactly that.

This is why young people who see Citizen Kane are so often disappointed: a film promoted as the “No. 1 Best Film of All Time” is implied to succeed in all areas a film conceivably could. But Citizen Kane clearly doesn’t do this; it simply tells its own specific story the best way it could. I don’t understand how one could place it in a ranking that states it is slightly better than Casablanca, Gone With the Wind, and The Godfather. Films should be judged on their own merit, not on the merit of other films.

As this year’s Golden Globes “press conference” showed, once the glitz, glamour, and false sense of entitled importance are stripped away from these award ceremonies, their total lack of value becomes evident. If the writers’ strike had kept the Oscars from happening, the trivial nature of the whole enterprise could have been revealed. Maybe the declining ratings will eventually destroy the need for such a ridiculous ceremony, and then films can be properly evaluated on their own instead of engaging in a political pissing contest to achieve some sort of meaningless ranking.

Nothing I’ve said is new. It’s not as if the Academy Awards have a great track record of recognizing works of film that have truly held significance over time. The awards would be far more accurate if the best film of 2007 were named twenty years from now. Maybe if Martin Scorsese or Al Pacino had won awards when they actually deserved them, or if Stanley Kubrick, Alfred Hitchcock, or Cary Grant had ever won a significant Oscar in their careers, then this arbitrary categorization would have some symbolic semblance of respectability, possibly even coming as close as one could get to an honest judgment of artistic merit (after all, this is not a science).

Instead, it mirrors the current Democratic race for the presidency in that it becomes not about the quality of those involved, but about who has the most votes. But in the race to elect one work of art as superior to another, I’ll stay apathetic.

On "Indies": A Case For the Oscars

Even though I’m about a month late, I thought it’d be interesting to make two back-to-back entries I’ve wanted to write for a while: one in favor of the Academy Awards, and one against.

I still find the Oscar ceremony incredibly unnecessary; in fact, it’s an overblown, self-righteous, even hypocritically superficial display of the rich, famous, and beautiful patting themselves on the back (case in point: Al Gore and Leonardo DiCaprio’s announcement at last year’s ceremony that the “Oscars are going green”—aren’t these award shows some of the most unnecessary and wasteful uses of energy around?). Furthermore, as most film fans know (and as I will argue in the following entry), the Oscars have a terrible track record: time shows that they lavish awards on the forgettable and ignore the iconic. That being said, I still get into it, and I haven’t missed a show in ten years.

But perhaps you’ve heard that the 80th Annual Academy Awards was one of the lowest-rated ceremonies in recent history. Journalists from Newsweek and Time noted beforehand the increasing trend of the Oscars recognizing independent films more often than high-grossing studio fare—one writer even said the nominees looked almost exactly the same as the Independent Spirit Awards’. Perhaps you’ve seen this trend in your own home when you’ve watched the broadcast with a friend or family member who exclaimed, “I’ve never heard of any of these movies!” (as if their level of familiarity were some sort of litmus test of legitimacy). The lack of broad awareness of these films in middle America has been credited as a large part of the declining popularity of the broadcasts over the past several years—the glitz and glamour can only bring in so many millions of viewers.

One journalist argued in an editorial that I read in an airport (and unfortunately couldn’t find online—sorry, guys…it was either in Newsweek or Time) that the Oscars should adopt a policy of honoring only Hollywood movies (that is, only the movies most Americans are familiar with) and leave the indies to the other award shows. He harks back to the Oscars of the ’80s and ’90s, when popular “Hollywood” fare like Chariots of Fire (1981), Terms of Endearment (1983), Out of Africa (1985), Rain Man (1988), The Silence of the Lambs (1991), Forrest Gump (1994), and Titanic (1997) took home the gold (I notice he chooses to omit the comparatively “artsy” and “obscure” The Last Emperor (1987)). He views this recent history as a time of consensus between audience and Academy: the “crowd pleasers” were also the “award winners.”

Yet this model hasn’t changed radically within the last few years, as many recent Best Picture winners have also been among the highest-grossing movies of their respective years. With the recent exceptions of No Country for Old Men and Crash, no Best Picture winner of the 21st century has made less than $100 million. However, this author most likely speaks not exclusively of the winners, but of the heightened presence of “indies” among the overall nominees.

This journalist naively writes as if independent and studio movies were mutually exclusive (and mistakenly lumps foreign-language films into the “indie” category as well). But most highly recognized “independents” are not truly independent, and this journalist fundamentally ignores the new business model of 21st-century studio filmmaking that has had this effect on the Academy Awards, in which the dividing line between studios and indies is increasingly blurred.

1) Most of these “indies” are made by the studios themselves. It is well known that the studios use subsidiary companies under their umbrella to finance and purchase works that appeal to a highbrow or niche market. Paramount Vantage, Fox Searchlight, and Focus Features all had Best Picture nominees this year—none of which were financed independently. Thus, “independent” is more a label than an actual business practice. Take two Best Picture nominees from the past two years, both from Fox Searchlight: Juno and Little Miss Sunshine. Both opened in limited release and generated buzz to become (supposedly) unexpected sleeper hits; both have ensemble casts of recognizable character actors and even bankable stars (Jennifer Garner, Steve Carell); and both were marketed as “quirky” indie comedies. Yet Little Miss Sunshine was independently financed and purchased at Sundance—Juno, by contrast, was studio-financed from the purchase of the script through casting, production, and marketing. Juno is an “indie” only by the label tacked on it by the advertising and the subsidiary company’s logo.

2) “Indie” movies aren’t, by their nature, mutually exclusive from audience-pleasing box office success stories. The highest-grossing Best Picture nominee this year was Juno, which even outgrossed many of the studios’ promising franchises (Rush Hour 3, Live Free or Die Hard, Fantastic Four: Rise of the Silver Surfer, Ocean’s Thirteen). The only studio movie nominated in the Best Picture category this year was Michael Clayton, which was outgrossed by fellow nominees Atonement, No Country, and Juno. Thus, restricting the Oscar nominees to movies released by a major studio label does not necessarily mean greater audience familiarity with the films involved.

3) Indie movies that get recognized by the Oscars are usually “legitimized” by something involved that is familiar or bankable to a potentially large audience. As illustrated with Little Miss Sunshine and Juno, many of these movies have familiar stars. Several recent “indie” nominees, like Babel, Brokeback Mountain, and Good Night, and Good Luck, have also benefited from significant star power. Others, like There Will Be Blood and No Country, were made by filmmakers who have had a consistent presence at the Academy Awards in years past. The Academy hardly ever recognizes the truly independent movies made with no stars, first-time filmmakers, and shoestring budgets (can you imagine a mumblecore film ever getting an Oscar nomination?). Oscar-friendly “indies” are remarkably similar to mainstream fare not only in production process, but in style, content, and the bigwigs involved.

4) The studios purposefully release their “prestige” movies through these subsidiaries so the major studio arms can focus on franchises and films with huge audience appeal. If you look for “Oscar-worthy” material solely among films released by major studios in the last 25 years, you’ll see a significant decline in “quality” filmmaking (i.e., potential Oscar material). Of the top 20 highest-grossing movies this year, nine are part of franchises, and five are adapted from a famous book/comic book/TV show with a built-in audience. Unless anybody feels that Transformers or Alvin and the Chipmunks didn’t receive enough Oscar recognition, it’s evident that the division between popular taste and “prestige” taste is the result of the studios segregating their movies: those with award potential go to the subsidiary arm, and those with a potential mass audience go to the major label. In other words, if Out of Africa or Terms of Endearment were released today, they would come out through a subsidiary studio in limited release to build award buzz rather than open wide alongside popular films.

The studios seem not to trust their prestige movies with mass audiences anymore, and possibly for good reason: as previously stated, the one film released by a major studio to gain significant Oscar attention was Michael Clayton (which many credited as a nostalgic hark back to the days when “smart” films for “adult” audiences were made regularly by studios), a movie that hardly made a peep amongst its opening-weekend box office competition. The studios have dictated popular taste and have all but completely confined it to films with sequel/franchise potential.

So how is this a “defense” of the Oscars? Well, the Academy Awards ceremony is (usually) the second most-watched network television event of the year (behind the Super Bowl), so it provides a major chance for these “indies” (which often play only in large cities, at least initially) to gain a larger audience—and they often do, as receipts usually go up the week following the broadcast. Though these movies aren’t truly “indies,” this annual pat-on-the-back the movie industry gives itself is perhaps the last bastion of hope for prestige entertainment to compete with the studios’ usual output. Sometimes it gives films that face huge obstacles in reaching a significant American audience—like The Diving Bell and the Butterfly and Persepolis—a chance to do so.

So if the networks are complaining that the Oscars aren’t gaining enough viewership because of the increasing marginality of the movies nominated, I have a rational solution: the networks should tell their neighboring studio friends (after all, they are all owned by the same people and operate through the same money flowing between them) to release their prestige fare wide and market it to compete against the typical franchises. Therefore, a) more people will be familiar with the yearly output of movies with award potential, thereby raising the ratings of the next Oscar broadcast; b) studios may see a financial incentive to make “good” films in competition with franchises, and may therefore compete with quality films not just to garner the most awards, but to gain box office receipts as well; and c) if all works out, we all benefit from a better variety of movies intended for a diverse array of audiences (niche and mass) at our local multiplex.

Because the idea of the alternative—an Oscar ceremony that only honors major studio work because the “indies” somehow aren’t “worthy” to stand alongside popular entertainment—is simply too much to bear.

And this year’s Best Supporting Actor is…Optimus Prime!