Friday, January 30, 2009
Those interested in digital copyright policy might be interested in the UK Department for Culture, Media and Sport's 'Digital Britain' Interim Report, which was released this week. I like the idea that the Government might not just be about maintaining the status quo. I often feel that the 'majority opinion' concept is ignored (not only in the field of copyright).
Section 3.2 seems particularly relevant:
'There is a clear and unambiguous distinction between the legal and illegal sharing of content which we must urgently address. But, we need to do so in a way that recognises that when there is very widespread behaviour and social acceptability of such behaviour that is at odds with the rules, then the rules, the business models that the rules have underpinned and the behaviour itself may all need to change.'
It also recommends the creation of a Rights Agency to:
'bring industry together to agree how to provide incentives for legal use of copyright material; work together to prevent unlawful use by consumers which infringes civil copyright law; and enable technical copyright-support solutions that work for both consumers and content creators. The Government also welcomes other suggestions on how these objectives should be achieved.'
Wednesday, November 26, 2008
Some of the earlier questions are oriented towards content creators. I answered 'not applicable' to a lot of them. I thought the question that asks you to define non-commercial use was interesting. I'll share mine in the comments on this post, and I encourage you to do the same (so don't read the comments until you've done the questionnaire!).
As previously announced, Creative Commons is studying how people understand the term “noncommercial use”. At this stage of research, we are reaching out to the Creative Commons community and to anyone else interested in public copyright licenses – would you please take a few minutes to participate in our study by responding to this questionnaire? Your response will be anonymous – we won’t collect any personal information that could reveal your identity.
Because we want to reach as many people as possible, this is an open access poll, meaning the survey is open to anyone who chooses to respond. We hope you will help us publicize the poll by reposting this announcement and forwarding this link to others you think might be interested. The questionnaire will remain online through December 7 or until we are overwhelmed with responses — so please let us hear from you soon!
Questions about the study or this poll may be sent to firstname.lastname@example.org.
Friday, November 21, 2008
"The Delicious CopyRightNews account has been running for almost 18 months now, and has been growing bigger by the month. Not sure if I am finding more articles or copyright is becoming a more discussed topic!" This is a digesting service for mostly Australian copyright material, a useful example of social tagging to share access to a wide range of emerging commentary from diverse sources. It is apparently run by Vanessa Tuckfield, who is doing a review of it at present.
Labels: guest post
Thursday, October 09, 2008
For readers interested in copyright's early history (1500-1800) and unable to spend a month or two among the public records at Kew, there is a great article by H. Tomas Gomez-Arostegui, 'What History Teaches Us About Copyright Injunctions and the Inadequate-Remedy-at-Law Requirement', 81 S. Cal. L. Rev. 1197 (2008).
Tomas has also started compiling a website where he includes an Appendix of Copyright Infringement Suits & Actions From c. 1560 to 1800, and pdfs of some of the cases. More cases to follow as he expands the resource. You can find this all here.
It is a really impressive project.
Thursday, June 12, 2008
Some months ago, I participated in a public panel session run by a university department. It was video-recorded. The university requested from all participants a copyright release for "non-commercial educational purposes - for example, a teaching resource for undergraduate and postgraduate students, and public access to short excerpts via our web site". All participants signed that release. So far so good.
Today, I was approached by the university in question to sign a much more substantial 'Presenter's Deed of Consent'. It included "a non-exclusive, royalty free, worldwide, irrevocable, and perpetual licence to Exploit", including the right to sub-licence. There was no mention of a constraint to non-commercial uses only, i.e. it authorises commercial uses. (Added to that, "The Presenter is not entitled to claim a fee or royalty for the use of the Recording by the University or its sub-licensees".)
The reason for this dreadful, old-fashioned proprietorial form of consent transpired to be that the University is participating in the about-to-be-launched Australian version of iTunes U.
iTunes is an Apple product and service that provides paid access to music and other media. iTunes U is an extension that enables universities to put material up on that site.
Again, so far so good, because multiple channels for discovery of and access to content are a good thing.
Here comes the rub.
The iTunes conditions appear to preclude the University from making material placed on iTunes U subject to an open content licence. It appears that the conditions apply not only to the version available through iTunes, but also to versions available through other channels. (Note: I briefly looked on the iTunes site for information on the conditions and licences, but had little success.)
That would mean that anything a university makes available through iTunes is locked down and proprietised. And the new breed of profit-oriented universities will find that just too tempting, and will seek to 'extract rents', as economists are wont to say, or 'charge serious money', as the rest of us do.
Unless and until the iTunes U conditions are found to be different from what I fear (or they are changed), content producers who want their materials to be openly available need to refuse permission for them to be made available through that channel.
Dr Roger Clarke
Thursday, August 16, 2007
At a recent meeting at DEST about the RQF there were some comments made about problems with the creation of the digital repository where all the research material related to submissions made by Universities is to be stored. We were told they were still engaged in discussions and negotiations with publishers over rights.
This got me thinking: rather than engage in discussions with publishers about this arguably non-commercial use of copyright works produced in the higher education sector, why not put the energy into drafting a copyright exception? After all, the Act is already full of exceptions relating to education, and simplicity of the Act can hardly be a current goal. I guess there would be issues of retrospectivity for the past RQF period, but not for future iterations. More law reform might be a more productive use of time and would also be in the public interest.
But then I got to thinking: what rights do these publishers supposedly have?
Clearly they have rights to the typesetting of work they publish, but I doubt they are being so specific.
Putting aside completely the question of whether the author or their institutional employer actually owns the works in the first place, most authors have book contracts with copyright provisions that are duly signed. But I have written a few book chapters for a number of local and international publishers and have never once been asked to assign those rights. In terms of journals, I have only once been asked to sign a copyright agreement, and that one I altered to retain some rights. Most of my colleagues have never signed agreements in relation to the majority of their book chapters and journal articles. Doesn't case law say that, in circumstances where there is no written agreement, the publisher holds no more than a bare licence? If so, why is DEST talking to publishers about our work, and presumably going to pay royalties for the privilege of accessing copies of it for RQF purposes, out of a budget that could otherwise go to more important uses in our underfunded sector?
Labels: guest post
Thursday, July 19, 2007
House of Commons housemeister Abi reported a dreadful attack on the rights of Harry Potter fans last week as they waited for the latest and perhaps last instalment. Once hastily scanned pages of the much-awaited new Potter book were posted on a web site, taken down (oops, too late), and distributed via file-sharing as reported here, a legion of nasty little devils began spoiling everyone's fun by uploading plot-revealing snippets.
The unreleased novel's plot secrets were effectively broadcast by these cheeky killjoys to deliberately defuse the drama and anticipation for readers everywhere. So now, beyond the routinely-claimed commercial effects of digital piracy, should we add a new offence to the litany of intellectual property crimes: intentionally undermining the pleasure of the text?
Strange new network effects have also appeared due to the ubiquity of these posts, which suddenly seemed to be everywhere on the Internet. Legions of expectant Potter readers (you know who you are), on hearing of these outrages, realised that there was, for the last week, probably nowhere safe to go on the net: no way to be online without the risk of being inadvertently infected with one of these guerrilla memes – indiscriminate cluster-bombs of premature revelation peppering the online landscape, little plot viruses that leach away the motivation for reading the final instalment if you are accidentally exposed to them.
An unintended consequence of this new social form of Internet 'malware' may be that, in order to preserve your blissful ignorance of key plot twists until at last you have the tome in your sweaty paws, fans may be reluctantly obliged to seek temporary safe haven in real work offline, rather than risking accidental exposure tooling about on the net 'for research' as usual.
Don't go reading the paper though -- just when we were sure this pathetic plot-rot was the work of bratty 12-year-olds, along comes Sydney's Sun-Herald to restore our faith in the Fourth Estate and Old Media's race to the bottom: there, on page 10, in upside-down back-to-front writing you can only read in the mirror, were four of the (ex-)secrets.
(OK, admittedly this was after the witching hour of publication -- but the modus operandi was the same as the net culprits', curse them all.)
Wednesday, May 23, 2007
In March 2007, the Free Software Foundation (FSF) released the third draft of version three of its General Public License [sic] (the “GPL”). There will be one more “last call” draft before the licence text is settled; a final version is scheduled for July 2007. The window for comments on this draft closes (on my calculation) on 27 May (US time) – so if you are interested in commenting, now is your chance.
This version updates the previous GPL v2, which dates from 1991. To put this in context, in 1991 the world did not have: Windows 95 or any later version (Windows 3.0 was released in 1990); the Internet (science departments in universities did); the World Wide Web; peer-to-peer file sharing; CD or DVD writers; Google; or broadband. The technology landscape has changed substantially since 1991. In addition, we didn't have the TRIPS Agreement, the US did not have the DMCA, and Australia did not have the Australia-US Free Trade Agreement. Not only has the technology changed, the law itself has changed since the licence was last updated. GPL v3 is intended to update the licence to address some of these changes.
The wording of GPL v3 is determined by the FSF in consultation with the broader free software community. The FSF has engaged the Software Freedom Law Centre and, through it, other lawyers to assist in the drafting (disclosure: the SFLC engaged me in relation to some aspects of GPL v3). The FSF has a public website through which anyone can provide comments, and it also maintains a number of committees that have reviewed and commented on the drafts. While difficult to prove, I hazard it would be fair to say that GPL v3 has been subject to wider public input than any other licence in the history of the world.
The main aims of the new draft are fundamentally the same as those of GPL v2: to protect users' freedom (“Our primary concern remains, as it has been from the beginning, to give users freedom that they can rely on”, from the original process definition document). This draft has attempted to be pragmatic while remaining consistent with this aim. The emphasis on practicality can be seen in, for example, the limiting of some obligations to “User Products” – that is, the obligations only apply in circumstances where the drafters expect there will be an inequality of bargaining power between the supplier and the acquirer.
Some of the issues that GPL v3 addresses include “Tivo-isation”, where a device manufacturer locks down a device so that, while the user has the theoretical freedom to modify the device, they do not have the practical ability to do so; TPM/anti-circumvention legislation – that is, the principle that free software should not be suborned to defeat others' freedom; software patents; and internationalisation.
The final issue to be addressed by GPL v3, and the reason this draft was delayed, is the use of software patents to undermine free software. In November 2006, Novell and Microsoft announced a deal under which Microsoft and Novell each agreed not to sue the other's users for patent infringement. The FSF announced it would be extending the drafting deadline specifically to address this patent agreement. The overarching scenario is set out by Eben Moglen in a talk he gave at the recent Red Hat Summit (references below).
GPL v3 has harnessed the tendency (at least prior to the decision of the High Court in Stevens v Sony) of the courts to give overly expansive readings to the scope of copyright law. GPL v3 is drafted to implicitly adapt to differing interpretations of the scope of the law. The new terms it introduces (“propagate” and “convey”) attempt to cover the field of rights, and do so in a way which is jurisdiction-neutral (in my view, some term better than “propagate” could probably have been found). It is these expansive readings (e.g. of the authorisation right) which will permit GPL v3 to have a broad reach.
By and large, draft 3 of GPL v3 has been well received, with many former critics warming (or, at least, not objecting) to the revised wording. While there is some criticism of the text, I have not noticed any of substance against this draft to date. That said, the drafters have failed to rectify my pet annoyance with the licence: each of its paragraphs should be numbered.
Moglen video: http://www.youtube.com/watch?v=6YExl9ojclo
Transcript (apparently) here:
Labels: guest post
Tuesday, April 24, 2007
Wednesday 11 July – Thursday 12 July @ Surfers Paradise Marriott Resort and Spa, Gold Coast,
The conference promises to be an exciting and engaging forum for researchers, technologists, and educators with interests and expertise in e-Research who recognise the need to remain current in this rapidly advancing field.
With vast change to the global research sector due to advances in information and communications technology (ICT), e-Research now supports all disciplines, from the sciences to the humanities.
This conference will examine legal issues facing e-Research both in
International Keynote Speakers will include:
- John Wilbanks, Executive Director of Science Commons
- Dr Michael Spence, Head of the Social Sciences Division of the
- Paul Uhlir, Director of International Scientific and Technical (S&T) Information Programs at The National Academies in
- Claire Driscoll, Director of Technology Transfer for the National Human Genome Research Institute (NHGRI, NIH),
- Dr Chris Greer, Program Director, Office of Cyberinfrastructure, National Science Foundation (NSF), Arlington, VA; and
- Special guest speaker Professor Fiona Stanley AC, Director Australian Telethon Institute for Child Health Research, Executive Director Australian Research Alliance for Children and Youth and Australian of the Year (2003).
For more information and online registration go to www.e-Research.law.qut.edu.au.
The Legal Framework for e-Research project is funded by the Australian Commonwealth Department of Education, Science and Training (DEST), under the Research Information Infrastructure Framework of Australian Higher Education, as part of the Commonwealth Government’s Backing Australia’s Ability – An Innovation Action Plan for the Future (BAA) report.
Monday, April 16, 2007
SCRIPT-ed, the peer-reviewed online journal at the University of Edinburgh Faculty of Law, has published a Special Issue, ‘Creating Commons’ (Volume 4, Issue 1, March 2007). It’s based on papers presented at the conference 'Creating Commons: The Tasks Ahead in Unlocking IP', held at the University of New South Wales, Sydney, on 10-11 July 2006. The ‘Unlocking IP’ project, funded by the Australian Research Council, investigates the rapidly changing relationship between public and private rights in Australian copyright law and practice. It explores options for maximising the ‘unlocking’ of the potential uses of copyright works through sharing and trade in works involving public rights (open content, open source and open standards licensing) and through enhancement of the public domain.
The papers in this Special Issue address all four main aspects of the project: (i) theories and taxonomy of public rights (Greenleaf); (ii) voluntary licences and their consistency, simplicity, and effectiveness (Bond, Coates); (iii) technical issues in finding works with public rights more effectively (Bildstein); and (iv) incentives to expand public use rights (Clarke) and requirements to protect them (de Zwart). Nicol’s paper deals with aspects of all four topics in relation to patent regimes and biotechnology, whereas the focus of the other papers is on copyright. One common theme in most papers is the national dimension of commons: to what extent are commons created by and situated in the copyright regimes, institutions and practices (including licences) of particular countries? Is the ‘Australian commons’ significantly different in its features from the ‘Scottish commons’, or are both now largely homogenised in a US-flavoured international commons stew?
No surprises that voluntary ‘commons licences’ are the main focus of the Special Issue, so let’s start there. Ben Bildstein in ‘Finding and Quantifying Australia’s Online Commons’ asks some new questions: ‘how are public rights in fact being expressed in the online commons?’, and its converse ‘how can you find works that are part of Australia’s online commons, using current tools?’. He gives us a snapshot of the ‘Australian online commons’ in 2006, stratified by licence types, a baseline study for a longitudinal analysis of the ‘down under’ bit of the commons over the next few years. Watch <http://www.unlockingip.org/> for developments.
Jessica Coates (‘Creative Commons – The Next Generation: Creative Commons licence use five years on’) provides an overview and analysis of the practical application of the Creative Commons licences five years after their launch. She takes a more qualitative approach to analysis of changes in licence use over time, who is using which licences, and their likely motivations for doing so. These licence use trends, she argues, help to rebut arguments that Creative Commons is a movement of academics and hobbyists, and has no value for traditional organisations or working artists.
More questions from Catherine Bond, who asks in ‘Simplification and Consistency in Australian Public Rights Licences’ how voluntary licences can be further simplified to increase both usage and ease of use. She suggests that this could occur through drafting a longer version for potential licensors and a short version for licensees, with simpler language the goal of both. She also questions whether consistency between licences is important, concluding that while it may be desirable and feasible at a national level, ideological differences may prevent its achievement at the international level.
In the final paper on commons licences, Roger Clarke rejects the application of conventional ‘scarce resource’ economics to content (‘Business Models to Support Content Commons’), and argues that more appropriate forms of economic analysis show the critical role that accessibility to information plays in the process of innovation. He identifies a range of suitable business models for open content to demonstrate that the content commons is sustainable and appropriate for profit-oriented enterprises.
Every country’s constitution is different when it comes to the question of protecting commons against the copyright maximalists. Melissa de Zwart (‘The Future Of Fair Dealing In Australia: Protecting Freedom Of Communication’) concludes that Australia’s judicially articulated implied constitutional guarantee of freedom of political communication is too narrow to act as a control upon copyright law. However, the doctrine of fair dealing encompasses elements of freedom of communication and provides some scope for the recognition of such rights under Australian law.
In ‘Creating commons by friendly appropriation’ I argue that the operation of Internet-wide search engines such as Google illustrate an unusual method of creating an intellectual commons, which I call ‘friendly appropriation’. I suggest eight conditions conducive to the successful creation of commons by friendly appropriation, and give some examples of other situations either side of the line. These commons may be rare but are hardly insignificant: a fully-developed theory of intellectual commons needs to recognise them.
Diane Nicol’s ‘Cooperative Intellectual Property in Biotechnology’ rounds off the Special Issue by reminding us that commons are not only about copyright. She explores the range of legal options for dealing with some of perceived problems with the exclusive rights model of patent management in biotechnology. She sets out alternative co-operative approaches including open access models to show their many parallels to issues concerning copyright and commons.
You can watch the Unlocking IP project unfold on its web pages and more entertainingly on the project researchers’ blog ‘The House of Commons’. You’re standing in it.
Labels: guest post
Wednesday, March 28, 2007
Queensland University of Technology and Sydney University Press have announced the publication of a new collection of papers on open access, Open Content Licensing: Cultivating the Creative Commons.
Edited by Professor Brian Fitzgerald, Open Content Licensing: Cultivating the Creative Commons brings together papers from some of the most prominent thinkers of our time on the internet, law and the importance of open content licensing in the digital age. Drawing on material presented at the Queensland University of Technology conference of the same name in January 2005, the text provides a snapshot of the thoughts of over 30 Australian and international experts – including Professor Lawrence Lessig, Futurist Richard Neville and the Hon Justice Ronald Sackville – on topics surrounding the international Creative Commons, from the landmark Eldred v Ashcroft copyright term decision to the legalities of digital sampling in a remix world. It also provides case studies of a number of Australian-based open access projects, including AESharenet and the Youth Internet Radio Network, and a detailed section on policy and law relating to computer games.
In line with the book’s theme, both the hardcopy and the electronic version have been published under a Creative Commons Attribution-Noncommercial-No Derivatives licence.
Hardcopies can be ordered from Sydney University Press here, while a PDF of the entire work can be downloaded for free from the QUT e-Prints Archive here. Individual chapters are also available for free electronic download here.
For more information on the book and its contents, visit the Creative Commons Australia page on this new work here.
Labels: guest post
Monday, February 12, 2007
Michael Geist has put into words what many were thinking, in 'Vista's Fine Print Raises Red Flag'.
This explores legal, privacy and technical issues with Windows Vista validation, the new re-balancing of user rights in the End User Licence Agreement (you have less right to control your computer than you thought), and the implementation of functionality reduction (down-sampling HD video resolution unless an HDMI or similar DRM-aware video interface is used) at the behest of the MPAA.
The comments are interesting too, with many succinctly setting out issues relevant to malware research. This shows that these issues are accessible to a general audience, and that it is possible to encapsulate them simply:
"I have just read about Sony BMG and the FTC ruling that states that the action of installing DRM onto consumers machines without their knowledge is indeed illegal. It appears that Microsoft is doing exactly the same thing, but using the EULA to make it legal."
Labels: guest post
Thursday, November 16, 2006
If the Copyright Amendment Bill 2006 is rushed through Parliament, Australians will be the not-so-proud owners of a complex and inflexible copyright regime that's out of date the day it becomes law. Despite what seemed like good intentions, the government has delivered a 200-page mess of changes, without the "fair use" flexibility that U.S. consumers and innovators depend on.
"What's most disturbing is that this Bill is so demonstrably anti-innovation, and for no good reason. U.S. fair use has given businesses like Google, iTunes and YouTube enough room to explore new business models without being suffocated at birth by outdated copyright laws. Without fair use, the next great Internet company is unlikely to come out of Australia," said ADA Chairman, Jamie Wodetzki.
"Even worse, this Bill risks making ordinary Australians criminals, in some cases where they don't even know they're breaking copyright law."
The ADA has called upon the Federal Government to embrace a flexible defence of fair use to ensure that Australia’s copyright laws are credible, relevant, and timely for consumers and technology developers alike.
The Senate Committee on Legal and Constitutional Affairs published its report on the Bill, which was introduced into Parliament on 18 October and is due to become law in December.
Despite the short time-frame the Committee was required to report within, the Committee recognised a number of serious flaws in the proposed legislation and made a number of recommendations in line with the ADA's concerns.
Amongst other things, the Committee recommended that ordinary uses of digital music players by consumers be rendered legitimate, that copying for preservation purposes in educational and cultural institutions be legitimised, that the criminal offence provisions be re-drafted to ensure that the activities of ordinary Australians and legitimate businesses are not caught, and that contracting out of the exceptions to the TPM scheme be prohibited.
Labor and Democrats Senators recommended deferral of the Bill. In recognising the serious flaws in the consultative process, Labor Senators noted:
"The extremely complex nature of the issues coupled with the extremely short time-frame set by the Government for the inquiry, seriously hampered the Committee in its efforts to comprehensively consider and report on all the evidence before it."
Whilst there is no doubt that an overhaul of copyright legislation is much required, unless the Government heeds the recommendations of this report and allows further consultation in relation to the very complex provisions of this Bill, it will almost certainly fail in its stated aim of bringing our laws in line with rapidly changing technological realities.
This was also released as a statement on behalf of the Australian Digital Alliance (ADA) on 14 November 2006
The ADA is a coalition of public and private sector interests formed to promote balanced copyright law. ADA members include universities, software companies, libraries, schools, museums, galleries and individuals.
Labels: guest post
Tuesday, November 14, 2006
Over the past decade, there have been a number of inquiries into the defence of fair dealing under Australian copyright law.
The current Australian Copyright Act 1968 has a defence of fair dealing, which provides protection against claims of copyright infringement. The defence is limited to particular purposes, such as research and study, criticism and review, reporting the news and use for judicial proceedings. The defence of fair dealing has been questioned for lacking clarity and certainty: famously, the judges of the Federal Court of Australia could not agree on whether The Panel's use of Channel Nine segments constituted a fair dealing in particular cases. The defence has also failed to keep up with technological and cultural developments.
At long last, the Attorney-General, Philip Ruddock, has introduced the hefty 219-page Copyright Amendment Bill 2006 into Parliament, declaring: "The Government is committed to dealing with these challenges to copyright head-on, while seeking to also acknowledge the opportunities technology presents. We want laws in place which mean copyright pirates are penalised for flouting the law whilst ordinary consumers are not infringing the law through everyday use of copyright products they have legitimately purchased." The legislation has three main features. It introduces a range of miscellaneous exceptions to copyright infringement, provides for the stronger protection of digital copy protection and access codes demanded by the Australia-United States Free Trade Agreement, and provides a wider array of civil and criminal remedies for copyright owners.
Unfortunately, the Bill will not meet its laudable objectives. The legislation is not "net neutral": it will apply to particular digital technologies in highly specific ways. It is also drafted in a highly convoluted way, which will make it difficult for judges and lawyers to understand, let alone everyday consumers, technology developers, and capitalists. It will certainly provide copyright owners with a wide range of civil and criminal remedies in respect of infringement of economic rights and circumvention of "technological protection measures", but it does not fix the manifold problems with the defence of fair dealing.
Instead, the Federal Government has created a range of nugatory miscellaneous exceptions, which are narrowly tailored to particular subject matter and technology, and to specific purposes and activities. There is limited scope for recording broadcasts for replaying at a more convenient time (time-shifting). There is also a range of technology-specific provisions which allow for the reproduction of copyright material in a different format for private use. In particular, the legislation allows for format-shifting of sound recordings (in other words, space-shifting). However, some commentators have suggested that the tightly worded provisions would not cover the everyday use of an iPod or other MP3 player. Moreover, the podcasting and webcasting of radio broadcasts and similar programs has been excluded from the scope of such an exception.
The Government has also proposed a strange, catch-all provision which deals with various miscellaneous uses. This clause deals with the use of copyright material for certain residual purposes, such as non-commercial use by libraries, archives, and educational institutions, and for satire and parody. Such activities are subject to the so-called three-step test under World Trade Organisation rules. This clause looks unworkable. Consider a cartoonist using a copyright work, for instance wittily depicting Labor Senator Stephen Conroy as a Dalek. The parodist would have to demonstrate that such a use was a special case, that the use did not conflict with the normal exploitation of the copyright work, and that the use did not unreasonably prejudice the interests of the copyright owner. The cartoonist would also have to engage in an interpretation of the three-step test under international trade law (which will be difficult, given that there has been only one inconclusive WTO panel judgment on the subject). Such preconditions seem somewhat excessive hurdles for a satirist to have to jump.
Such nugatory, miscellaneous copyright exceptions are a poor substitute for the open-ended, flexible defence of fair use in the United States. The Supreme Court of the US has described the defence of fair use as ''the guarantee of breathing space for new expression within the confines of copyright law'' and called the defence ''an engine of free expression''. Not only does the fair use defence cover particular purposes such as criticism, comment, news reporting, teaching, scholarship and research; the US courts have held that it embraces such activities as time-shifting and space-shifting, parody and transformative uses, reverse engineering, and the use of thumbnail images in search engines. The doctrine provides a much wider safe harbour than that offered by the Government's Bill. The Bill needs reform in both form and content to meet academic Paul Goldstein's triple bottom line of ''brevity, simplicity and fairness'' and to ensure consumers, libraries, educational institutions, and technology developers enjoy the same freedoms and liberties as their US counterparts.
Dr Rimmer is a senior lecturer at the ANU College of Law.
This guest post was first published as an opinion piece in the Canberra Times, 13 November 2006.
See the Senate Report on the Copyright Amendment Bill 2006 (just released)
Media commentary on the Copyright Amendment Bill
- Barclay, P. "Piracy, Consumers and the Digital Age", Australia Talks Back, ABC Radio National, 15 November 2006, featuring the Attorney-General Philip Ruddock, Melissa de Zwart, David Brennan and Matthew Rimmer.
- Murray, L. "Soon Recordings Will Be A Crime", The Sydney Morning Herald, 14 November 2006.
- Funnell, A. "Gauging the Great Google Game Plan", the Media Report, ABC Radio National, 9 November 2006.
- Guest, A. "Copyright Law Changes Face Criticism", PM, ABC, 7 November 2006.
- Skatssoon, J. "New Laws May Cripple 'Online' Searching", ABC Science Online, 7 November 2006.
Labels: guest post
Monday, November 13, 2006
14 November 2006 is World Usability Day
(Sydney event: http://www.worldusabilityday.org/event/show/163)
“Every citizen on our planet deserves the right to usable products and services. It is time we reframe our work and look at a bigger global picture. The time is right, the interest is here. 'User friendly' is a common and understandable term; people understand that the world should work well. Now, we have to encourage them to take the message to the streets and say, 'We will not stand for it anymore, we want our world to be usable.'”
“No more excuses, no more managers complaining about budgets and schedules. No more marketing people selling functionality and power that is more than we need. No more consumers buying things we cannot or do not need to use.”
This global day of celebration of usability (one of my favourite risk management tools) invites speculation about usability in our domain of open content licences and free/open source software licences.
- Who are the 'users' of these licences: both authors and 'consumers'? Intermediaries? Institutions? Aggregators? Integrators? Editors? Later authors?
- What is important to these groups in being able to use a suitable licence without unnecessary fuss or confusion?
- What features of specific licences, or of the licence model, help or hinder such use?
We could also ponder the significance of 'open standards' for the development and adoption of intuitive common interfaces or document models. Such commonality and predictability are important for supporting learning and intuitive guessing about how similar things work: you can learn one system and then guess how others work.
- Are existing licences standardised and predictable enough, or are they monuments to the idiosyncratic individual "creativity" of their authors? Is this a fair question? Are 'standards' possible yet, or at all?
- Are they compatible enough between various alternative models/licences/versions, or a jumble of incompatible systems, as in the old days of IT and networks?
See also the current GPL v3 discussions, including inter-version compatibility http://www.cyberlawcentre.org/2006/gpl/resources.htm
This prompts consideration of the negative impact of increasing complexity of the range of licence options (both between licences, and among options for a given licence); and the potential difficulty for neophytes in interpreting how the specific provisions of particular licences will actually operate in the real world (and hence, whether a given licence will do what they hope, or raise other problems).
Users must struggle with a double hit of complexity: in both the law (copyright law is notoriously complex and perverse!) and the technology (a magic black box for many people who just want to get on with it).
It can thus be hard for mere mortals to 'grok' (comprehensively understand) how a given open licence and its technology or content interact.
- Is there anything that could be done to reduce this steep learning curve, this barrier to wider adoption and use?
Finally, it is worth distinguishing between "User-centred design" and "Usability Evaluation" approaches. Both rely on going back to the actual users, not relying on your own guesses. But "Usability evaluation" is more common, and more limited - it can be done any time, and here lies the trap. In practice it is often done at the end of a project, seen merely as part of testing. But this is usually too late, since it is by then too expensive to fix fundamental or conceptual bugs. "User-centred design" however is much preferable, as it starts with the early design stage and goes through, giving feedback at every stage. Done right, it can catch the fundamental errors early enough to change or dump the plan.
Applied to open licences, this might encourage early resort to real world research into what the users actually think, want and need, and what gets in their way, rather than further expert elaboration based on received assumptions from earlier rounds of these licences. Is anyone doing this?
Labels: guest post
Wednesday, November 08, 2006
Wikipedia is the most prominent of the new-age collaborative information sources. But even its champions acknowledge that there are challenges, and choices to be made.
Larry Sanger, one of Wikipedia's co-founders, has long been dissatisfied with some aspects of its management. He announced on 17 October 2006 his intention to spawn a fork, or republished version, of Wikipedia that is intended to progressively develop higher-quality, more reliable articles.
Sanger envisages the core difference about Citizendium as being a set of editors, with interleaved scope, who will take responsibility for approving articles and amendments to articles. There will be rules that are rather less loose than Wikipedia's (e.g. contributors must declare their 'real names' - whatever that means), 'constables' who will enforce the rules, and a process for appointing and controlling editors and constables. Sanger intends that the appointment process will have collaborative features, but the proposal at this stage is sketchy.
The essence of the debate is whether and how to quality-assure the content of collaborative information sources. The orthodoxy within the open movement is the 'many eyes' principle: errors will come to attention and be addressed, because of the sheer volume of people who are looking and who are empowered to do something about them. The risk of pollution is high, and anarchy looms; but believers say it can be avoided.
Some people are nervous about pollution and anarchy, and uncomfortable with constructive looseness. They prefer layers of controls, and trust in a few rather than trust in the 'great unwashed hordes'. They point to the increasing incidence of Wikipedia pages being frozen for short periods, to let tempers cool. (As this was being written, the Wikipedia entry for 'Wikipedia' was locked, with the explanation "Because of recent vandalism or other disruption, editing of this article by unregistered or newly registered users is currently disabled. Such users may discuss changes, request unprotection, or create an account.").
The distinctions between the two approaches might be seen this way:
| QA Principle | 'Many eyes' | 'A few good men' |
|---|---|---|
| QA Style | Open collaboration among many | An inner clique of guardians, possibly self-perpetuating |
| QA Process | Informal review, by genuine 'peers' as in 'equals' | Formal review, by an approved set of 'peers' as in 'peers of the realm'? |
| Editorial Style | Self-organising and/or anarchic | Hierarchical command and control, but with a collaborative appointment process? |
There are many aspects of Citizendium that cast doubt on its ability to survive any longer than its predecessor Nupedia, let alone thrive. Will the elite few prove to be as energetic as the egalitarian hordes? Will the bureaucracy of editorial committees cause even the first few score pages to miss their window of opportunity? Will any of the pages ever score high enough on Google rankings to be noticed? Will the quality difference matter to people, or will the 'good enough' of Wikipedia trump the new approach, just as Microsoft's Encarta, by using some of Funk & Wagnall's middle-brow encyclopaedia, trumped Britannica? Will the inevitable re-branding as something trendier like 'Zendi' be enough to revive interest?
Ultimately, the community will vote with its feet, or consumers will determine what the market wants by paying with their clicks and eyeballs (choose your preferred metaphor). Perhaps the venture's greatest contribution will be to help us learn about quality assurance of open content.
[This was a guest post, written by Roger Clarke. It is available from Roger Clarke's website under either an AEShareNet licence or a Creative Commons licence. -- Ben]