An online collection of links, articles and websites relevant to the teaching of Media and Cinema Studies in the 21st Century. Designed with the needs of the contemporary student in mind, this blog is intended to be a resource for teachers and students of the media alike.
Until recently, the MMA (which incorporated the earlier CLASSICS Act) looked like the major record labels’ latest grab for perpetual control over twentieth-century culture. The House of Representatives passed a bill that would have given the major labels—the copyright holders for most recorded music made before 1972—broad new rights in those recordings, lasting all the way until 2067. Copyright in these pre-1972 recordings, already set to last far longer than even the grossly extended terms that apply to other creative works, would a) grow to include a new right to control public performances such as digital streaming; b) be backed by copyright’s draconian penalty regime; and c) lack many of the user protections and limitations that apply to other works.
Two things changed the narrative. First, a broad swath of affected groups spoke up and demanded to be heard. Tireless efforts by library groups, music libraries, archives, copyright scholars, entrepreneurs, and music fans made sure that the problems with the MMA were made known, even after it sailed to near-unanimous passage in the House. You contacted your Senators to let them know the House bill was unacceptable to you, and that made a big difference. Second, the public found a champion in Senator Ron Wyden, who proposed a better alternative in the ACCESS to Recordings Act. Instead of layering bits of federal copyright law on top of the patchwork of state laws that govern pre-1972 recordings, ACCESS would have brought these recordings completely under federal law, with all of the rights and limitations that apply to other creative works. While that still would have brought them under the long-lasting and otherwise deeply flawed copyright system we have, at least there would be consistency.
Weeks of negotiation led to this week’s compromise. The new “Classics Protection and Access Act” section of the MMA clears away most of the varied and uncertain state laws governing pre-1972 recordings, and in their place applies nearly all of federal copyright law. Copyright holders—again, mainly record labels—gain a new digital performance right equivalent to the one that already applies to recent recordings streamed over the Internet or satellite radio. But older recordings will also get the full set of public rights and protections that apply to other creative works. Fair use, the first sale doctrine, and protections for libraries and educators will apply explicitly. That’s important, because many state copyright laws—California’s, for example—don’t contain explicit fair use or first sale defences.
The new bill also brings older recordings into the public domain sooner. Recordings made before 1923 will exit from all copyright protection after a 3-year grace period. Recordings made from 1923 to 1956 will enter the public domain over the next several decades. And recordings from 1957 onward will continue under copyright until 2067, as before. These terms are still ridiculously long—up to 110 years from first publication, which is longer than any other U.S. copyright. But our musical heritage will leave the exclusive control of the major record labels sooner than it would have otherwise.
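To make the tiered schedule concrete, here is a minimal Python sketch of how a recording’s public-domain year could be worked out. The pre-1923 grace period and the 2067 cut-off come from the paragraph above; the 100-year tier for 1923-1946 recordings and the 110-year tier for 1947-1956 recordings are assumptions drawn from common summaries of the bill rather than the statute’s exact text.

```python
# A rough sketch (not a legal reference) of the tiered public-domain schedule
# for pre-1972 sound recordings described above. The 100-year (1923-1946) and
# 110-year (1947-1956) tiers are assumed from common summaries of the bill.

def public_domain_year(publication_year: int, enactment_year: int = 2018) -> int:
    """Approximate year a pre-1972 recording leaves copyright under the MMA."""
    if publication_year < 1923:
        # Pre-1923 recordings exit protection after a 3-year grace period.
        return enactment_year + 3
    if publication_year <= 1946:
        # Assumed tier: roughly 100 years from first publication.
        return publication_year + 100
    if publication_year <= 1956:
        # Assumed tier: roughly 110 years from first publication.
        return publication_year + 110
    # Recordings from 1957 onward stay under copyright until 2067.
    return 2067

for year in (1920, 1935, 1950, 1965):
    print(year, "->", public_domain_year(year))
```

On those assumptions, a 1950 recording would stay locked up until 2060, and anything recorded from 1957 onward waits until 2067.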
The bill also contains an “orphan works”-style provision that could allow more use of old recordings even when the rightsholder can’t be found. After first checking that a pre-1972 recording isn’t in commercial use, anyone can file a notice with the Copyright Office and use it for non-commercial purposes. The rightsholder then has 90 days to object, and if they do, the potential user can still argue that the use is fair. This provision will be an important test case for solving the broader orphan works problem.
The MMA still has many problems. With the compromise, the bill becomes even more complex, running to 186 pages. And fundamentally, Congress should not be adding new rights in works created decades ago. Copyright law is supposed to create incentives for new creativity that enriches the public; adding new rights to old recordings creates no such incentive. And copyrights as a whole, including sound recording copyrights, still last far too long.
Still, this compromise gives us reason for hope. Music fans, non-commercial users, and the broader public have a voice in shaping copyright law—a voice that was heard this time—so long as legislators are willing to listen and act.
George Orwell’s 1984, Black Mirror S03E01, Psycho-Pass, The Orville and many others have all explored how technology can make our lives better… or worse.
Popular apps for young kids, especially those available in Google’s app store, are teeming with advertisements that distract them from play, manipulate them into making purchases, and extract their personal data.
That’s the conclusion of a new study that’s prompted a slew of child advocacy groups to ask the US federal government to investigate these products. The groups argue that many apps violate the Federal Trade Commission Act by disguising ads, programming characters to lure kids into purchases, or misleading parents into thinking the games are educational.
“What we’re hoping is that the FTC will fine the app developers and fine them enough that it sends a clear message to the preschool app industry,” Josh Golin, executive director of the Campaign for a Commercial-Free Childhood, told BuzzFeed News. His group and 21 others signed a letter sent to the FTC today outlining their concerns, based largely on the new study’s findings.
On TV, ads aimed at kids must follow certain rules. Product placement isn’t allowed, for example, and neither is “host selling,” when a character encourages kids to buy something. But those rules, set by the Federal Communications Commission, don’t apply to the internet. “The FCC wouldn’t touch this,” Golin said. “We have this regulatory vacuum.”
The new study looked at 135 kids’ apps, a mix of paid and free, iOS and Android, including 96 of the most frequently downloaded in the “Ages 5 and Under” category of the Google Play Store. About one-third were labelled “educational.” Most of the free apps had been downloaded more than 5 million times each, and the paid ones more than 50,000 times.
Almost all — 88% of paid apps and 100% of free ones — contained at least one type of advertising, the study found, such as pop-up ads, banner ads, in-app purchases, and commercial characters.
Banner ads sometimes showed content that’s inappropriate for kids, the researchers said, such as a Health Living Today ad for “10 Bipolar Facts to Learn: Search Treatments.” Other ads were for apps like Pocket Politics, a game that shows a cartoon of President Trump wanting to press a “nukes” button, and FastLand, a car shooting game. Both of these app ads played a demonstration video before they could be closed.
For Golin, one of the most disturbing examples was Doctor Kids, which shows a character crying if you don’t click on an in-app purchase. “Children form real attachments to these characters,” he said. “For a kid, that’s a pretty powerful thing to express, when a character is crying.” (Doctor Kids’ maker, Bubadu, did not respond to a request for comment.)
Nine of the kids' apps contained what the researchers call “camouflaged” ads, which are made to look like part of the game but bring up a video ad when clicked. On the My Talking Tom app, for example — which has had more than 500 million installs, according to Google Play — kids will see a present drop down from the ceiling. If they tap on it, they’ll be prompted to “watch videos and win.” (The maker of My Talking Tom, Outfit7, did not respond to a request for comment.)
In Builder Game, which has more than 10 million installs on Google Play, thought bubbles pop up over characters telling the child what to do. Sometimes, the study found, the bubbles led to games that could only be played after watching an ad. (Builder Game’s creator, also Bubadu, did not respond to a request for comment.)
The leader of the new study, pediatrician Jenny Radesky of the University of Michigan, remembers one morning last winter when she observed her then 8-year-old son playing an app called Masha and the Bear Vet Clinic, in which he tried to help remove thorns from a sick wolf. After watching an ad video, the game gave him a pair of tweezers that made it easier to get the thorns out and accumulate candy rewards.
“I asked him, ‘Why are you willing to watch an ad video just to do that?’ He said, ‘I get candy,’” Radesky told BuzzFeed News. (The owner of Masha and the Bear, Animaccord, did not respond to a request for comment.)
Her son is like most kids his age or younger, she said, who don’t have the critical thinking skills to understand the “persuasive intent” of an advertisement — that the apps want you to watch the ads because they benefit financially. “That sort of stuff was really hard for him to understand.”
Past studies have shown that even brief exposures to ads embedded in cartoons and other media can influence children's brand preferences, noted Tom Robinson, professor of pediatrics at Stanford University. It's "disheartening," he said, "that so many app makers are willing to use such insidious methods that so obviously take advantage of children’s vulnerabilities."
Most of the public furore over screen time, both in academic studies and the popular press, has focused on the amount of time that kids use apps, with kids under 5 averaging about one hour per day with mobile devices. But researchers are beginning to recognize that what kids are seeing and doing with technology is just as important as, if not more important than, how long they’re doing it. (Radesky, for example, is not anti-app: for patients who struggle with temper tantrums, she recommends the Daniel Tiger’s Neighborhood apps, which she says can “teach both parent and child what to do in a moment of stress.”)
In 2016, Radesky helped write the latest American Academy of Pediatrics guidelines for kids and screens. Although the guidelines were less restrictive than the previous version, when it came to the subject of advertisements they drew a hard line, saying that advertisements in kids’ apps should be eliminated. “It’s not ethical because they don’t understand it. They’re just going to click on it,” Radesky said.
Another big concern with kids’ apps is data privacy. Although the Children’s Online Privacy Protection Act (COPPA) limits how much personal information can be collected and tracked from kids under 13, thousands of apps distributed by Google may violate the rule, according to a report published earlier this year. Six apps analyzed in the new study requested users’ location information, a potential violation of COPPA.
“It’s a race to the bottom right now with a lot of these preschool apps,” Golin said. “Their whole goal is to get higher in the Google Play Store ranking.”
Platforms like Google and Apple have a gatekeeping role to play, he and Radesky agreed. Apple, for instance, doesn’t allow apps to be listed in the “Kids” category of its iOS App Store if they have in-app purchases (unless they are behind a parental gate), or if they serve ads based on the user’s activity (although they can still serve ads).
Perhaps it’s not surprising that Apple, which built a business around fancy devices and curated services, would have app rules that could hurt its advertising revenue. Google, on the other hand, is in the ad business.
In an emailed statement, a Google spokesperson said that Google Play apps primarily directed at children must participate in its “Designed for Families” program. They must adhere to COPPA rules and certain ad and content restrictions. “Additionally, Google Play discloses whether an app has advertising or in-app purchases, so parents can make informed decisions.” (One of its kid-specific rules, for example, forbids showing ads that could be mistaken for app content — which seems to have been violated by some of the apps flagged in the new study.)
Radesky hopes platforms like Google and Apple will do more. “If they could just put the good stuff up top, that would be awesome.”
In the last few years, we’ve discovered just how much trust - whether we like it or not - we have all been obliged to place in modern technology. Third-party software, of unknown composition and security, runs on everything around us: from the phones we carry around, to the smart devices with microphones and cameras in our homes and offices, to voting machines, to critical infrastructure. The insecurity of much of that technology, and the increasingly discomforting motives of the tech giants that control it from afar, have rightly shaken many of us.
But the latest challenge to our collective security comes not from Facebook or Google or Russian hackers or Cambridge Analytica: it comes from the Australian government. Its proposed “Assistance and Access” bill would require the operators of all of that technology to comply with broad and secret government orders, free from liability and hidden from independent oversight. Software could be rewritten to spy on end users; websites re-engineered to deliver spyware. Our technology would have to serve two masters: its users, and what a broad array of Australian government departments decides are the “interests of Australia’s national security.” Australia would not be the last to demand these powers: a long line of countries is waiting to demand the same kind of “assistance.”
In fact, Australia is not the first Western nation to think of granting itself such powers. In 2016, the British government took advantage of the country’s political chaos at the time to push through, largely untouched, the Investigatory Powers Act (IPA) - the first post-Snowden law to expand, rather than contract, Western domestic spying powers. At the time, EFF warned of its dangers - particularly orders called “technical capability notices”, which could allow the UK to demand modifications to tech companies’ hardware, software, and services to deliver spyware or place backdoors in secure communications systems. These notices would remain secret from the public.
Last year we predicted that the other members of Five Eyes (the intelligence-sharing coalition of Canada, New Zealand, Australia, the United Kingdom, and the United States) might take the UK law as a template for their own proposals, and that Britain “… will certainly be joined by Australia” in proposing IPA-like powers.
That’s now happened. This month, in the midst of a similar period of domestic political chaos, the Australian government introduced their proposal for the “Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018.” The bill unashamedly lifts its terminology and intent from the British law.
But if the Australian law has taken elements of the British bill, it has also whittled them into a far sharper tool. The UK bill created a hodge-podge of new powers; Australia’s bill recognizes which of the IPA’s powers matter most and zeroes in on two of them: assistance and access.
If this bill passes, Australia will - like the UK - be able to demand complete assistance in conducting surveillance and planting spyware from a vast slice of the Internet tech sector and beyond. Rather than having to come up with ways to undermine the increasing security of the Net, Australia could simply demand that the creators or maintainers of that technology re-engineer it on demand.
It’s worth underlining here just how sweeping such a power is. To give one example: our smartphones are a mass of sensors. They have microphones and cameras, GPS locators, fingerprint and facial scanners. The behaviour of those sensors is only loosely tied to what their user interfaces tell us.
Australia seeks to give its law enforcement, border, and intelligence services the power to order the creators and maintainers of those tools to do “acts and things” to protect “the interests of Australia’s national security, the interests of Australia’s foreign relations or the interests of Australia’s national economic well-being”.
The “acts and things” are largely unspecified - but they include enabling surveillance, hacking into computers, and remotely pulling data from private computers and public networks.
The range of people who would have to secretly comply with these orders is vast. The orders can be served on any “designated communications provider”, which includes telcos and ISPs, but is also defined to include a “person [who] develops, supplies or updates software used, for use, or likely to be used, in connection with: (a) a listed carriage service; or (b) an electronic service that has one or more end users in Australia”; or a “person [who] manufactures or supplies customer equipment for use, or likely to be used, in Australia”.
Examples of electronic services may “include websites and chat fora, secure messaging applications, hosting services including cloud and web hosting, peer-to-peer sharing platforms and email distribution lists, and others.”
You can see the full list in the draft bill in section 317C, page 16.
As Mark Nottingham, co-chair of the IETF’s HTTP working group and member of the Internet Architecture Board, notes: “Everyone who’s ever written an app or hosted a Web site - worldwide, since one Australian user is the trigger - is a potential recipient, whether they’re a multimillion dollar company or a hobbyist.” That takes in Debian ftpmasters and Linux developers; Mozilla and Microsoft; certificate authorities like Let’s Encrypt; and DNS providers.
There are some signs that the companies affected by these orders have learned the lessons of the IPA and pushed back during the Assistance and Access Bill’s preliminary stages. Unlike the UK bill, there are clauses forbidding providers from being required to “implement or build [a] systemic weakness or systemic vulnerability into a form of electronic protection” (S.317ZG), and preventing actions in some cases that would cause material loss to others lawfully using a targeted computer (e.g. S.199(3), pg 163). Companies have an opportunity to be paid for their troubles, and billing departments can’t be targeted. There is some attempt to prevent government agencies forcing providers to “make false or misleading statements or engage in dishonest conduct” (S.317E).
But these are tiny exceptions in a sea of permissions, and easily circumvented. You may not have to make false statements, but if you “disclose information”, the penalty is five years’ imprisonment (S.317ZF). What is a “systemic weakness” is determined entirely by the government. There is no independent judicial oversight. Even counselling an ISP or telco to not comply with an assistance or capability order is a civil offence.
If the passage of the UK surveillance law is any guide, Australian officials will insist that while the language is broad, no harm is intended, and that the more reasonable, narrower interpretations were meant. But none of those protestations will result in amendments to the law, because Australia, like Britain, wants the luxury of broad and secret powers. There will be - and can be - no true oversight, and the kind of malpractice we have seen in the surveillance programs of the U.S. and U.K. intelligence services will spread to Australia’s law enforcement. Trust and security in the Australian corner of the Internet will diminish - and other countries will follow the lead of the anglophone nations in demanding full and secret control over the technology, the personal data, and the individual innovators of the Internet.
“The government,” says Australia’s Department of Home Affairs website, “welcomes your feedback” on the bill. Comments are due by September 10th. If you are affected by this law - and you almost certainly are - you should read the bill and write to the Australian government urging it to rethink this disastrous proposal. We need more trust and security in the future of the Internet, not less. This is a bill that will breed digital distrust and undermine the security of us all.