SIMON ROBERT COTTON

The role of international law in the decline of war

2/11/2018

 
Note: This post originally appeared on the international relations blog, Duck of Minerva.
Much of the commentary on Oona Hathaway and Scott Shapiro’s recent book, The Internationalists, including at Duck of Minerva, has focused on the empirical basis for their controversial thesis. Hathaway and Shapiro do not just claim that much of the decline in major interstate war that we have seen since the Second World War is down to mere reformulation of black-letter law, but that the Kellogg-Briand Pact of 1928, which appeared an embarrassment in its immediate aftermath, was pivotal to this transformation.

It is unsurprising, then, that political scientists have taken issue with their claim. In contrast, The Internationalists’ philosophical presuppositions have attracted less attention. This is a pity, because the book represents an invaluable opportunity to demonstrate the practical relevance of philosophy of law, an area that hard-headed social scientists are apt to dismiss.

Writing at Foreign Policy, prominent realist Stephen Walt argues that it is more likely that shifting incentives have been responsible for the decline in war. In particular, he claims that nuclear weapons have raised the security costs of war, trade and financial integration have raised its economic costs, and democracy has raised its electoral costs. As such, wars that would previously have passed a cost-benefit analysis have failed to do so at an increasing rate since 1945.

On the face of it, Walt appears to favor shifting incentives at the expense of law for methodological reasons. He says that if ‘changing norms’ were driving the decline in war we ‘should be able to point to numerous cases where national leaders had a clear incentive to expand their territory… and then decided not to go ahead… because they believed such an act was inherently wrong.’ Yet, Walt claims, ‘as one reads the book, one searches in vain for direct evidence of this sort.’ Here, Walt is of course appealing to what Harry Eckstein termed the ‘crucial-case’ method. A crucial case is one in which, if the outcome is observed (in this case, a war averted), then it cannot but be due to the factor hypothesized, as all other factors were absent.

There are problems with this criticism. First, why isn’t the burden on exponents of shifting incentives to point to crucial cases? How many cases are there of war being thought obligatory (consider an accountable government guarding the perceived entitlements of its citizens) and yet leaders not going ahead because of disincentives? Second, even were there no such case, we could not conclude that law was impotent. Perhaps law remains sufficient to prevent war; it is just that law and incentives have tended, historically, to move together. Most importantly, however, law is most effective when it works by shifting incentives, as Hathaway and Shapiro point out in their response to Walt. As such, were they to offer the kind of case Walt describes, they would eliminate the factor behind their own explanation rather than merely the factors behind competing hypotheses.

It might thus be that Walt is really hung up on a problem that realists have thought insurmountable since Hobbes’s Leviathan. How could law shift incentives? Without a common government to enforce compliance, what good are mere words? In their reply to Walt, Hathaway and Shapiro seek to undercut this concern by offering a hypothetical. Specifically, they claim that it is doubtful the US would have invaded Mexico for nonpayment of debts in 1846 had it been illegal to do so. For one thing, resources, including those that come with territory, are less valuable to the extent that they are not in demand, and, they suggest, demand is dampened when resources are tainted by lack of legal ownership. What reason would the US have had to seize valueless resources?

Without elaboration, this response might be thought to beg the question. Wouldn’t the US have been able to trade the resources it forcibly took from Mexico anyway, given the absence of a global government to punish purchasers for dealing in stolen goods? Of course, it is entirely possible that purchasers would have been hesitant to buy those resources had they thought that they couldn’t sell them on. But this just pushes the question back a step. Where it is not already the case that countries are hesitant to buy resources seized in war because of a reasonable fear of not being able to sell them on, how could publicly designating such goods ‘stolen’ by reformulating the law change matters?

In his philosophical work Legality, Shapiro suggests an answer to this question by drawing an analogy between a set of laws and a plan. Just as having a plan can enable a group to achieve outcomes that would otherwise be infeasible—despite a plan being also, as it were, mere words—so too can law. (Indeed, Shapiro takes it that a legal system is a particular kind of instantiated plan.) For illustration, it’s entirely possible that at least some states in the 1840s would have been prepared to boycott resources seized through war provided that doing so would actually accomplish something. More relevantly, it’s entirely possible that, had only some states done so, other less-principled powers would have had a selfish reason to follow suit. Yet, without a legal prohibition to play the role of a focal point around which states could coordinate, any one state that boycotted would forgo benefits for, potentially, nothing. After all, unilateral boycotts accomplish little where markets are competitive.
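The coordination problem described here can be made concrete with a toy game. The sketch below is mine, not anything from The Internationalists, and its payoff numbers and participation threshold are purely illustrative assumptions. The point is simply that universal trading and universal boycotting are both stable outcomes, so which one obtains depends on what states expect one another to do, and a public legal prohibition is exactly the sort of thing that can anchor those expectations.

```python
# A toy coordination game, not anything from The Internationalists:
# the payoffs and participation threshold below are illustrative assumptions.
from itertools import product

TRADE_GAIN = 2      # what a state gains by trading in seized goods
BOYCOTT_PAYOFF = 3  # what every state gains if the boycott succeeds
THRESHOLD = 3       # number of boycotters required for the boycott to bite

def payoff(choice, n_boycotting):
    """Payoff to one state, given its choice and the total number of boycotters."""
    if n_boycotting >= THRESHOLD:
        # The boycott bites: seized goods are unsellable, and all states
        # share the benefit of conquest being deterred.
        return BOYCOTT_PAYOFF
    # The boycott fails: boycotters forgo the gains from trade for nothing.
    return 0 if choice == "boycott" else TRADE_GAIN

def is_equilibrium(profile):
    """True if no state can do better by unilaterally switching its choice."""
    n = sum(c == "boycott" for c in profile)
    for c in profile:
        if c == "boycott":
            if payoff("trade", n - 1) > payoff("boycott", n):
                return False
        else:
            if payoff("boycott", n + 1) > payoff("trade", n):
                return False
    return True

# Both all-trade and all-boycott survive as equilibria; a legal prohibition
# serves as the focal point that selects the cooperative one.
for profile in product(["boycott", "trade"], repeat=3):
    if is_equilibrium(profile):
        print(profile)
```

Run it and only the all-trade and all-boycott profiles print: each state does best matching what it expects the others to do, which is why mere words that fix those expectations can shift incentives.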

None of this is meant to imply that the thesis Hathaway and Shapiro offer in The Internationalists is not vulnerable on empirical grounds. It may well be. But it should not be dismissed on the mistaken assumption that it posits something inconceivable, an assumption it is all too easy to fall into without an appreciation of the nature of law.

"Taxpayers' rights!" cannot justify private school subsidies

10/13/2016


 
In recent weeks, state support for private education has been in the news. Journalists have revealed that some Australian private schools receive government funds far in excess of the levels proposed by the Gonski Review, the 2011 inquiry which recommended that, at least as far as the allocation of new funds was concerned, a greater emphasis should be placed on overall school need regardless of sector.

But should the state be in the business of subsidising private schools at all? For radical opponents of subsidies, the answer appears obvious. Public funding for private schools is a contradiction in terms. This is overly simplistic, however. Even if, semantically, we wanted to restrict the word “private” to schools that receive no public money whatsoever—and used some other term to refer to those schools that are independently governed yet receive state funds—we would still be left with the question of whether such funding was warranted.

More significantly, proponents of subsidies have recourse to an ostensibly powerful argument, one that opponents seldom address directly.

I’m not referring here to the claim that private schools save the government money by educating children who would otherwise have to be accommodated by an already overstretched public sector, and are hence owed money as compensation. You may as well say that the private sector owes the government money, by virtue of the fact that public schools save the former money by educating students whom the state would otherwise be compelled to ensure private schools educate.

Besides, even if the obligation to ensure that each and every child is appropriately educated did not also fall on the shoulders of private school parents, there is no general duty to pay others for saving you money provided that doing so was an unintended side-effect of their pursuing their own projects. After all, where no sacrifice is made, no compensation is owed. And it is not as if, in educating their pupils, private schools are aiming to lift the burden on the public sector.

Rather, I’m referring to the claim that, as private school parents pay their taxes too, their children are owed a share of public expenditure on education, if not the full share allocated to public school children.

I call this argument ostensibly powerful only because of its practical influence. In reality, it is extremely weak. We do not normally think that, just because you pay your taxes, you are entitled to at least some share of each and every particular line item of government expenditure. If anything, you are entitled to at least some share of each item in some mix of such expenditures, with the composition of that mix depending, to some extent, on need. More to the point, school spending is spending on children, not adults, and thus the notion that private school subsidies might be justified on the basis of tax contribution is entirely misplaced. Children don’t pay taxes.

Most importantly, however, the fact that private school parents pay their taxes too either entitles their children to a full share of public funding or, potentially at least, to no share at all, and the former possibility is incredible.

Were private school students to receive the same government funds as public school students—on the basis that their parents pay no less in taxes, all else being equal, than public school parents—then they would be in receipt of unfair advantages. Why? Because each and every one of them would have more spent on their education in total than each and every public school student, which would make a mockery of substantive equality of opportunity, perhaps our most broadly endorsed political ideal. There is undoubtedly, then, an upper limit on public funding for private schools.

As long as the government is constrained by a requirement not to exacerbate inequalities in opportunity, though, there is no lower limit on public funding either, despite the fact that private school parents pay their taxes too. Where, and for as long as, private schools outperform public schools, government funding should flow to the latter in order not to widen that gap, and private schools are not entitled to any public support as long as it persists.

This does not mean, of course, that there is some absolute prohibition on government funding for private schools. Far from it. Where private schools are among the worst-performing, they too should be prioritised. But the mere fact that private school parents pay their taxes does not, in itself, give us reason to be blind to school and student need.

Australia needs to prioritise tackling inequality

8/31/2016


 
With the opening of Australia’s 45th parliament, the new Coalition government has begun to articulate its economic agenda in somewhat greater depth. Unfortunately, there is already cause for concern. If early signs are any indication, the government has not yet got to grips with the ramifications of growing inequality and is overly concerned with public debt.

The Treasurer’s first significant speech was seemingly short on specifics and long on contradictions. In this light, its chief sound bite—that Australia is facing a growing divide, once benefits are factored in, between the “taxed and the taxed-nots”—assumes a greater significance. And it is a worrying one. To suggest that it is necessarily unfair for people to be net beneficiaries of the tax and transfer system over their lifetime as a whole is to disavow one of government’s fundamental responsibilities, which is to spread risks—whether of unemployment, low pay, or ill health—across the population as a whole. As a matter of fact, it would be the rare nation indeed that was able to discharge this duty without some people being net beneficiaries of government in this sense. Of course, it is possible for the state to be overly zealous in pursuit of this goal, or for an equitable tax and transfer system to be undermined by citizens’ attempts to evade their responsibilities or claim more than they are owed. But evidence in the Australian case suggests the contrary. Growing inequality is depriving those at the bottom of the means to be included in the “taxed”—a group they would be better off a part of but cannot access. As a result, “reigning [sic] in the growth of welfare spending”, as the Treasurer has proposed, would compound unfairness rather than diminish it.

The other worry is that the government continues to use household budget metaphors to reference the need to tackle the budget deficit and government debt. This may be an attempt to relate the problem to the average voter. Nevertheless, it overstates it. There is generally less reason to be concerned about government debt than household debt for three reasons. First, the government has the capacity to boost economic activity (at least when the economy is below its existing capacity, and perfectly good productive resources are being underutilised as a consequence of a coordination failure). As such, it has the power to increase its ability to meet the real burden of its debt by spending, as greater economic activity means greater tax revenue. Second, governments of monetarily sovereign countries (like Australia, but unlike Greece) need never default on their debt as such, because they can always create the money needed to pay it off by fiat. Third, and perhaps most importantly, a proportion of a nation’s public debt is standardly owed to its own citizens. To that extent, it is money we owe ourselves, and there is no comparison to “loading more and more debt on the shoulders of our children and grandchildren” or “living effectively on the credit card of the generations that come after us”. True, future generations stand to lose if we fail to pay down our debt now to the extent that ours is a government “of the people”, but they also stand to gain in their role as private creditors to the extent that that debt is maintained.

Needless to say, none of this represents an unfettered license for the Australian government to borrow and spend. The danger of inflation, and the fact that a majority of Australia's public debt is now foreign owned, represent real, if rather remote, constraints. But the default imperative is not, as generally in the household case, to eliminate as much debt as we can as quickly as we can. Rather, the focus should be on tackling inequality before it begins to have the kinds of implications we have already seen in Britain and the US.

Common misconceptions about the UBI

8/5/2016


 
In the July edition of Prospect, Jon Cruddas and Tom Kibasi argue that a universal basic income (UBI) remains “a bad idea”. Part of their complaint is that it is unrealistic. However, it may not be as unrealistic as they presume, particularly if a number of widespread and hence influential confusions—confusions that are perpetuated in the article (and in other popular treatments; see Eduardo Porter’s NYT op-ed)—can be ironed out. Here I address four.
 
1. According to C&K, a problem with the UBI is that it would “discourage work” and hence “it would be difficult to see a state [that paid a UBI] as being very fruitful”, particularly given that the “New Labour nirvana”—in which we are able to return to the pro-poor growth of the post-war decades via investing in human capital—has failed to materialise.
 
Nobody doubts that, were a UBI to be introduced, some people would quit their jobs and, as a consequence, GDP would decline (or, more likely, fail to increase at so great a rate, at least for a period). However, it is not at all obvious that this would be a problem. First, it is a commonplace that GDP is a poor measure of total economic value—particularly given externalities—much less aggregate welfare. If introducing a UBI would free up people to do non-market work that is not currently included in our national accounts (but ought to be), it may well be a very good thing.
 
More importantly, GDP is distribution-insensitive. What exactly would be the complaint if, as a consequence of introducing a UBI, consumer luxuries were somewhat more expensive provided that the worst-off amongst us were better off overall than they otherwise would have been by virtue of not having to work in the least-fulfilling and worst-paid jobs?
 
Of course, anyone would admit cause for concern if the disincentive to work entailed by a UBI were so strong that even the worst-off would be harmed were it to be introduced—as in the extreme case in which the economy contracts so much that the level of the universal income is limited out of necessity. However, a hunch is inadequate to determine the prospects of such an outcome. Are the authors aware of an experiment or model to buttress their position?
 
2. C&K argue that work “improves lives, and brings dignity”, whereas sitting back, relaxing, and tuning in “to daytime television” does not.
 
Undoubtedly so, but the argument for a UBI is about what means are permissible in promoting the valuable goal of ensuring that people do what is best for them, and work. If C&K wouldn’t be prepared, as I assume they would not, to force someone who would sincerely prefer to watch television into “work” by means of threats, why are they so comfortable with people being effectively forced into work by fear of destitution? Isn’t a bit of rational persuasion and cultural pressure more appropriate here? The other points to note are these: (a) it is unlikely that, having received a UBI, people would just sit and watch television, because to do so would, as the authors admit, be “antithetical to the values of most British people, who believe in the value of work”, and (b) paid employment acquired through the labour market does not, needless to say, exhaust the possible range of dignity-conferring work (consider childcare, community organizing, etc.).
 
3. According to C&K, the supporters of a UBI are hopelessly “utopian” in that they rely on “individuals, having received a universal income… [taking] it upon themselves to do socially useful things”.
 
I don’t think there are any supporters of the UBI who believe that a substantial number of employees in unfulfilling but socially useful (necessary?) jobs would, once provided with a UBI, spontaneously continue in those jobs out of the goodness of their hearts. But neither is this a problem (or necessarily a problem—see above). A market economy with a UBI no more relies on altruism to get socially necessary work done than does a market economy without a UBI. In either type of economy, socially useful work is elicited by means of a wage-incentive.
 
4. C&K claim that “embedded in the idea of universal basic income is the assumption that some jobs are worthwhile, and others not. Some may sneer at ‘McJobs’—but cleaners in McDonald’s stop infections, just as cleaners in NHS hospitals do. We all enjoy clean streets and benefit from well-stocked shops. Proponents of UBI ignore the value of such work: it only makes sense to be emancipated from an obligation that is inherently undesirable.”
 
Of the four claims considered here, this is the one that most misses the mark. (And like the previous confusion, it arises because the authors fail to consider the consistency of a UBI with, and the consequences of a UBI for, wages.) One of the strongest arguments for a UBI is that some of the most valuable work is also the worst paid (and hence, too, the lowest status) precisely because the people who undertake that work have no alternative way of supporting themselves, and hence limited bargaining power in the labour market. In providing them with an alternative, in other words, the UBI would put upward pressure on wage rates, bringing remuneration more in line with social contribution.
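For those who want the mechanism spelled out, here is a toy reservation-wage sketch. It is my own illustration, not anything from the Prospect exchange, and all of its numbers are arbitrary assumptions: each worker accepts an unpleasant job only if the wage covers their outside option plus the disutility of the work, so a UBI that raises everyone’s outside option raises the wage needed to fill the same jobs.

```python
# Toy reservation-wage model (all numbers are illustrative assumptions):
# a worker accepts a job iff wage >= outside option + disutility of the work.

def market_wage(outside_options, disutility, positions):
    """Lowest wage at which at least `positions` workers accept the job."""
    reservation_wages = sorted(o + disutility for o in outside_options)
    return reservation_wages[positions - 1]

# Outside options for five workers; zero means destitution if they refuse.
outside_no_ubi = [0, 0, 0, 1, 2]
UBI = 5
outside_with_ubi = [o + UBI for o in outside_no_ubi]

# Wage needed to fill three unpleasant cleaning jobs, before and after a UBI.
print(market_wage(outside_no_ubi, disutility=3, positions=3))    # -> 3
print(market_wage(outside_with_ubi, disutility=3, positions=3))  # -> 8
```

In this contrived example the market-clearing wage for the unpleasant jobs rises from 3 to 8 once workers have a genuine alternative, which is the sense in which a UBI brings remuneration more in line with social contribution rather than devaluing such work.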

    Author

    I'm a political theorist interested in applied ethics and political economy.

