Friday, December 29, 2017

2017: Not So Prime

Mathematicians call 2017 a prime year because 2017 has no factors other than 1 and itself. Those crazy number theorists.

I try to write at least one post here per month. I managed two in January. One of them raged at a Trump executive order that compelled federal libraries to rat on their users. Update: Trump is still president.  The second pointed out that Google had implemented cookie-like user tracking on previously un-tracked static resources like Google Fonts, jQuery, and Angular. Update: Google is still user-tracking these resources.

For me, the highlight of January was marching in Atlanta's March for Social Justice and Women with a group of librarians.  Our chant: "Read, resist, librarians are pissed!"



In February, I wrote about how to minimize the privacy impact of using Google Analytics. Update: Many libraries and publishers use Google Analytics without minimizing the privacy impact.

In March, I bemoaned the intense user tracking that scholarly journals force on their readers. Update: Some journals have switched to HTTPS (good) but still let advertisers track every click their readers make.

I ran my first-ever half-marathon!



In April, I invented CC-licensed "clickstream poetry" to battle the practice of ISPs selling my clickstream.  Update: I sold an individual license to my poem!

Science March NYC 2017
I dressed up as the "Trump Resistor" for the Science March in New York City. For a brief moment I trended on Twitter. As a character in Times Square, I was more popular than the Naked Cowboy!

In May, I tried to explain Readium's "lightweight DRM". Update: No one really cares - DRM is a fig-leaf anyway.

In June, I wrote about digital advertising and how it has eviscerated privacy in digital libraries.  Update: No one really cares - as long as PII is not involved.

I took on the administration of the free-programming-books repo on GitHub.  At almost 100,000 stars, it's the 2nd most popular repo on all of GitHub, and it amazes me. If you can get 1,000 contributors working together towards a common goal, you can accomplish almost anything!

In July, I wrote that works "ascend" into the public domain. Update: I'm told that Saint Peter has been reading the ascending-next-Monday-but-not-in-the-US "Every Man Dies Alone".

I went to Sweden, hiked up a mountain in Lappland, and saw many reindeer.



In August, I described how the National Library of Medicine lets Google connect PubMed usage to DoubleClick advertising profiles. Update: the National Library of Medicine still lets Google connect PubMed usage to DoubleClick advertising profiles.

In September, I described how user interface changes in Chrome would force many publishers to switch to HTTPS to avoid shame and embarrassment. Update: Publishers such as Elsevier, Springer and Proquest switched services to HTTPS, avoiding some shame and embarrassment.

I began to mentor two groups of computer-science seniors from Stevens Institute of Technology, working on projects for Unglue.it and Gitenberg. They are a breath of fresh air!

In October, I wrote about new ideas for improving user experience in ebook reading systems. Update: Not all book startups have died.

In November, I wrote about how the Supreme Court might quash an improvement to the patent system. Update: no ruling yet.

I ran a second half marathon!


In December, I'm writing this summary. Update: I've finished writing it.

On the bright side, we won't have another prime year until 2027. 2018 is twice a prime year. That hasn't happened since 1994, the year Yahoo was launched and the year I made my first web page!
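
For the skeptical, here's a throwaway Python sketch to check the year-number claims above:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(2017))                                      # True: a prime year
print(next(y for y in range(2018, 2100) if is_prime(y)))   # 2027: the next prime year
print(2018 == 2 * 1009, is_prime(1009))                    # True True: 2018 is twice a prime
print([y for y in range(1994, 2018) if y % 2 == 0 and is_prime(y // 2)])  # [1994]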

Sunday, November 26, 2017

Inter Partes Review is Improving the Patent System

Today (Monday, November 27), the Supreme Court is hearing a case, Oil States Energy Services, LLC v. Greene’s Energy Group, LLC, that seeks to end a newish  procedure called inter partes review (IPR). The arguments in Oil States will likely focus on arcane constitutional principles and crusty precedents from the Privy Council of England; go read the SCOTUSblog overview if that sort of thing interests you. Whatever the arguments, if the Court decides against IPR proceedings, it will be a big win for patent trolls, so it's worth understanding what these proceedings are and how they are changing the patent system. I've testified as an expert witness in some IPR proceedings, so I've had a front row seat for this battle for technology and innovation.

A bit of background: the inter partes review was introduced by the "America Invents Act" of 2011,  which was the first major update of the US patent system since the dawn of the internet. To understand how it works, you first have to understand some of the existing patent system's perverse incentives.

When an inventor brings an idea to a patent attorney, the attorney will draft a set of "claims" describing the invention. The claims are worded as broadly as possible, often using incomprehensible language. If the invention was a clever shelving system for color-coded magazines, the invention might be titled "System and apparatus for optical wavelength keyed information retrieval". This makes it difficult for the patent examiner to find "prior art" that would render the idea unpatentable. The broad language is designed to prevent a copycat from evading the core patent claims via trivial modifications.

The examination proceeds like this: The patent examiner typically rejects the broadest claims, citing some prior art. The inventor's attorney then narrows the patent claims to exclude prior art cited by the examiner, and the process repeats itself until the patent office runs out of objections. The inventor ends up with a patent, the attorney runs up the billable hours, and the examiner has whittled the patent down to something reasonable.

As technology has become more complicated and the number of patents has increased, this examination process has broken down. Patents with very broad claims slip through, often because the addition of the internet means that prior art was either un-patented or unrecognized because of obsolete terminology. These bad patents are bought up by "non-practicing entities" or "patent trolls" who extort royalty payments from companies unwilling or unable to challenge the patents. The old system for challenging patents didn't allow the challengers to participate in the reexamination. So the patent system needed a better way to correct the inevitable mistakes in patent issuance.

In an inter partes review, the challenger participates in the challenge. The first step in drafting a petition is proposing a "claim construction". For example, if the patent claims "an alphanumeric database key allowing the retrieval of information-package subject indications", the challenger might "construct" the claim as "a call number in a library catalog", and point out that call numbers in library catalogs predated the patent by several decades. The patent owner might respond that the patent was never meant to cover call numbers in library catalogs. (Ironically, in an infringement suit, the same patent owner might have pointed to the broad language of the claim, asserting that of course the patent applies to call numbers in library catalogs!) The administrative judge would then have the option of accepting the challenger's construction and opening the claim to invalidation, or accepting the patent owner's construction and letting the patent stand (but with the patent owner having agreed to a narrow claim construction!)
Disposition of IPR Petitions in the first 5 years. From USPTO.

In the 5 years that IPR proceedings have been available, 1,153 patents have been completely invalidated and 287 others have had some claims cancelled. 331 patents that have been challenged have been found to be completely valid. (See this statistical summary.) This is a tiny percentage of patents; it's likely that only the worst patents have been challenged; in the same period, about one and a half million patents have been granted.

It was hoped that the IPR process would be more efficient and less costly than the old process; I don't know if this has been true, but patent litigation is still very costly. At least the cases I worked on had correct outcomes.

Some companies in the technology space have been using the IPR process to oppose the patent trolls. One notable effort has been Cloudflare's Project Jengo. Full disclosure: They sent me a T-shirt!


Update (November 28): Read Adam Liptak's news story about the argument at the New York Times
  • Apparently Justices Gorsuch and Roberts were worried about patent property being taken away by administrative proceedings. This seems odd to me, since in the case of bad patents, the initial grant of a patent amounts to a taking of property away from the public, including companies who rely on prior art to assure their right to use public property.
  • Some news stories are characterizing the IPR process as lopsided against patent owners. (Reuters: "In about 1,800 final decisions up to October, the agency’s patent board canceled all or part of a patent around 80 percent of the time.") Apparently the news media has difficulty with sampling bias - given the expense of an IPR filing, of course only the worst of the worst patents are being challenged; more than 99.9% of patents are untouched by challenges!
Update (April 24, 2018): In a 7-2 decision, with Justices Gorsuch and Roberts dissenting, the Supreme Court upheld the constitutionality of inter partes review. The decision is here.

Sunday, October 29, 2017

Turning the page on ereader pagination

Why bother paginating an ebook? Modern websites encourage you to "keep on swiping" but if you talk to people who read ebooks, they rather like pages. I'll classify their reasons into "backward looking" and "practical".

Backward looking reasons that readers like pagination
  • pages evoke the experience of print books
  • a tap to turn a page is easier than swiping
Practical reasons that readers like pagination
  • pages divide reading into manageable chunks
  • turning the page gives you a feeling of achievement
  • the thickness of the turned pages helps the reader measure progress
Reasons that pagination sucks
  • sentences are chopped in half
  • paragraphs are chopped in half
  • figures and such are sundered from their context
  • footnotes are ... OMG footnotes!
How would you design a long-form reading experience for computer screens if you weren't tied to pagination? Despite the entrenchment of Amazon and iPhones, people haven't stopped taking fresh looks at the reading experience.

Taeyoon Choi and his collaborators at the School for Poetic Computation recently unveiled their "artistic intervention" into the experience of reading. (Choi and a partner founded the Manhattan-based school in 2013 to help artists learn and apply technology.) You can try it out at http://poeticcomputation.info/



On viewing the first chapter, you immediately see two visual cues that some artistry is afoot. On the right side, you see something that looks like a stack of pages. On the left is some conventional-looking text, and to its right is some shrunken text. Click on the shrunken text to expand references for the now shrunken main text. This conception of long-form text as existing in two streams seems much more elegant than the usual pop-up presentation of references and footnotes in ebook readers. Illustrations appear in both streams, and when you swipe one stream up or down, the other stream moves with it.

The experience of the Poetic Computation Reader on a smartphone adapts to the smaller screen. One or the other of the two streams is always off-screen, and little arrows, rather than shrunken images, indicate its existence.

 * * *

On larger screens, something very odd happens when you swipe down a bit. You get to the end of the "page". And then it starts moving the WRONG way, sideways instead of up and down. Keep swiping, and you've advanced the page! The first time this happened, I found it really annoying. But then, it started to make sense. "Pages" in the Poetic Computation Reader are intentional, not random breaks imposed by the size of the reader's screen and the selected typeface. The reader gets a sense of achievement, along with an indication of progress.

In retrospect, this is a completely obvious thing to do. In fact, authors have been inserting intentional breaks into books since forever. Typesetters call these breaks "asterisms" after the asterisks that are used to denote them. They look rather stupid in conventional ebooks. Turning asterisms into text-breaking animations is a really good idea. Go forth and implement them, ye ebook-folx!

On a smart phone, Poetic Computation Reader ignores the "page breaks" and omits the page edges. Perhaps a zoom animation and a thickened border would work.

Also, check out the super-slider on the right edge. Try to resist sliding it up and down a couple of times. You can't!

 * * *

Another interesting take on the reading experience is provided by Slate, the documentation software written by Robert Lord. On a desktop browser, Slate also presents text in parallel streams. The center stream can be thought of as the main text. On the left is the hierarchical outline (i.e. a table of contents), on the right is example code. I like the way you can scroll either the outline or the text stream and the other stream follows. The outline expands and contracts accordion-style as you scroll, resulting in effortless navigation. But Slate uses a responsive design framework, so on a smartphone, the side streams reconfigure into inline figures or slide-aways.

"Clojure by Example", generated by Slate.

There are no "pages" in Slate. Instead, the animated outline is always aware of where you are and indicates your progress. The outline is a small improvement on the static outline produced by documentation generators like Sphinx, but the difference in navigability and usability is huge.

As standardization and corporate hegemony seem to be ossifying digital reading experiences elsewhere,  independent experiments and projects like these give me hope that a next generation of ebooks will put some new wind in the sails of our digital reading journey.

Notes:
  1. The collaborators on the Poetic Computation Reader include Molly Kleiman, Shannon Mattern, Taeyoon Choi and HAWRAF. Also, these footnotes are awkward.


Monday, September 11, 2017

Prepare Now for Topical Storm Chrome 62

Sometime in October, probably the week of October 17th, version 62 of Google's Chrome web browser will be declared "stable". When that happens, users of Chrome will get their software updated to version 62 when they restart.

One of the small but important changes that will occur is that many websites that have not implemented HTTPS to secure their communications will be marked in a subtle way as "Not Secure". When such a website presents a web form, typing into the form will change the appearance of the website URL. Here's what it will look like:

Unfortunately, many libraries, and the vendors and publishers that serve them, have not yet implemented HTTPS, so many library users that type into search boxes will start seeing the words "Not Secure" and may be alarmed.

What's going to happen? Here's what I HOPE happens:
  • Libraries, Vendors, and Publishers that have been working on switching their websites for the past two years (because usually it's a lot more work than just pushing a button) are motivated to fix the last few problems, turn on their secure connections, and redirect all their web traffic through their secure servers before October 17.
          So instead of this:

           ... users will see this:

  • Library management and staff will be prepared to answer questions about the few remaining problems that occur. The internet is not a secure place, and Chrome's subtle indicator is just a reminder not to type sensitive information, like passwords, personal names, and identifiers, into "not secure" websites.
  • The "Not Secure" animation will be noticed by many users of libraries, vendors, and publishers that haven't devoted resources to securing their websites. The users will file helpful bug reports and the website providers will acknowledge their prior misjudgments and start to work carefully to do what needs to be done to protect their users.
  • Libraries, vendors, and publishers will work together to address many interactions and dependencies in their internet systems.


Here's what I FEAR might happen:
  • The words "Not Secure" will cause people in charge to think their organizations' websites "have been hacked". 
  • Publishing executives seeing the "Not Secure" label will order their IT staff to "DO SOMETHING" without the time or resources to do a proper job.
  • Library directors will demand that Chrome be replaced by Firefox on all library computers because of a "BUG in CHROME". (creating an even worse problem when Firefox follows suit in a few months!) 
  • Library staff will put up signs instructing patrons to "ignore security warnings" on the internet. Patrons will believe them.
Back here in the real world, libraries are under-resourced and struggling to keep things working. The industry in general has been well behind the curve of HTTPS adoption, needlessly putting many library users at risk. The complicated technical environment, including proxy servers, authentication systems, federated search, and link servers has made the job of switching to secure connections more difficult.

So here's my forecast of what WILL happen:
  • Many libraries, publishers and vendors, motivated by Chrome 62, will finish their switch-over projects before October 17. Users of library web services will have better security and privacy. (For example, I expect OCLC's WorldCat, shown above in secure and not secure versions, will be in this category.)
  • Many switch-over projects will be rushed, and staff throughout the industry, both technical and user-facing, will need to scramble and cooperate to report and fix many minor issues.
  • A few not-so-thoughtful voices will complain that this whole security and privacy fuss is overblown, and blame it on an evil Google conspiracy.

Here are some notes to help you prepare:
  1. I've been asked whether libraries need to update links in their catalog to use the secure version of resource links. Yes, but there's no need to rush. Website providers should be using HTTP redirects to send users to the secure connections, and should use HSTS headers to make sure that their future connections are secure from the start. (A quick way to check for redirects, HSTS, and mixed content is sketched after these notes.)
  2. Libraries using proxy servers MUST update their software to reasonably current versions, and update proxy settings to account for secure versions of provider services. In many cases this will require acquisition of a wildcard certificate for the proxy server.
  3.  I've had publishers and vendors complain to me that library customers have asked them to retain the option of insecure connections ... because reasons. Recently, I've seen reports on listservs that vendors are being asked to retain insecure server settings because the library "can't" update their obsolete and insecure proxy software. These libraries should be ashamed of themselves - their negligence is holding back progress for everyone and endangering library users. 
  4. Chrome 62 is expected to reach beta next week. You'll then be able to install it from the beta channel. (Currently, it's in the dev channel.) Even then, you may need to set the #mark-non-secure-as flag to see the new behavior. Once Chrome 62 is stable, you may still be able to disable the feature using this flag.
  5. A screen capture using chrome 62 now might help convince your manager, your IT department, or a vendor that a website really needs to be switched to HTTPS.
  6. Mixed content warnings are the result of embedding not-secure images, fonts, or scripts in a secure web page. A malicious actor can insert content or code in these elements, endangering the user. Much of the work in switching a large site from HTTP to HTTPS consists of finding and addressing mixed content issues.
  7. Google's Emily Schechter gives an excellent presentation on the transition to HTTPS, and how the Chrome UI is gradually changing to more accurately communicate to users that non-HTTPS sites may present risks: https://www.youtube.com/watch?v=GoXgl9r0Kjk&feature=youtu.be (discussion of Chrome 62 changes starts around 32:00)
  8. (added 9/15/2017) As an example of a company that's been working for a while on switching, Elsevier has informed its ScienceDirect customers that ScienceDirect will be switching to HTTPS in October. They have posted instructions for testing proxy configurations.
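
If you want a quick, rough check of where a site stands before October (see notes 1 and 6 above), here's a minimal sketch in Python using the requests library. The WorldCat hostname is just an example, and the mixed-content check only looks at resources referenced in the static HTML:

import re
import requests

def https_report(host):
    # Follow redirects from the plain-HTTP URL to see where the site sends users.
    r = requests.get("http://" + host, timeout=10, allow_redirects=True)
    print("final URL:             ", r.url)
    print("redirects to HTTPS:    ", r.url.startswith("https://"))
    print("HSTS header:           ", r.headers.get("Strict-Transport-Security"))
    # Crude mixed-content check: embedded resources still loaded over plain HTTP.
    mixed = set(re.findall(r'src=["\'](http://[^"\']+)', r.text))
    print("possible mixed content:", sorted(mixed)[:10])

https_report("www.worldcat.org")  # example host; try your own catalog or proxy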






Monday, August 14, 2017

PubMed Lets Google Track User Searches

CT scan of a Mesothelioma patient.
CC BY-SA by Frank Gaillard
If you search on Google for "Best Mesothelioma Lawyer" and then click on one of the ads, Google can earn as much as a thousand dollars for your click. In general, Google can make a lot of money if it knows you're the type of user who's interested in rare types of cancer. So you might be surprised that Google gets to know everything you search for when you use PubMed, the search engine offered by the National Center for Biotechnology Information (NCBI), a service of the National Library of Medicine (NLM) at the National Institutes of Health (NIH). Our tax dollars work really hard and return a lot of value at NCBI, but I was surprised to discover Google's advertising business is getting first crack at that value!

You may find this hard to believe, but you shouldn't take my word for it. Go and read the NLM Privacy Policy, in particular the section on "Demographic and Interest Data":
On some portions of our website we have enabled Google Analytics and other third-party software (listed below), to provide aggregate demographic and interest data of our visitors. This information cannot be used to identify you as an individual. While these tools are used by some websites to serve advertisements, NLM only uses them to measure demographic data. NLM has no control over advertisements served on other websites.
DoubleClick: NLM uses DoubleClick to understand the characteristics and demographics of the people who visit NLM sites. Only NLM staff conducts analyses on the aggregated data from DoubleClick. No personally identifiable information is collected by DoubleClick from NLM websites. The DoubleClick Privacy Policy is available at https://www.google.com/intl/en/policies/privacy/
You can opt-out of receiving DoubleClick advertising at https://support.google.com/ads/answer/2662922?hl=en.
I will try to explain what this means and correct some of the misinformation it contains.

DoubleClick is Google's display advertising business. DoubleClick tracks users across websites using "cookies" to collect "demographic and interest information" about users. DoubleClick uses this information to improve its ad targeting. So for example, if a user's web browsing behavior suggests an interest in rare types of cancer, DoubleClick might show the user an ad about mesothelioma. All of this activity is fully disclosed in the DoubleClick Privacy Policy, which approximately 0% of PubMed's users have actually read. Despite what the NLM Privacy Policy says, you can't opt out of receiving DoubleClick advertising; you can only opt out of DoubleClick ad targeting. So instead of mesothelioma ads, you'd probably be offered deals at Jet.com.

It's interesting to note that before February 21 of this year, there was no mention of DoubleClick in the privacy policy (see the previous policy ). Despite the date, there's no reason to think that the new privacy policy is related to the change in administrations, as NIH Director Francis Collins was retained in his position by President Trump. More likely it's related to new leadership at NLM. In August of 2016, Dr. Patricia Flatley Brennan became NLM director. Dr. Brennan, a registered nurse and an engineer, has emphasized the role of data to the Library's mission. In an interview with the Washington Post, Brennan noted:
In the 21st century we’re moving into data as the basis. Instead of an experiment simply answering a question, it also generates a data set. We don’t have to repeat experiments to get more out of the data. This idea of moving from experiments to data has a lot of implications for the library of the future. Which is why I am not a librarian.
The "demographic and interest data" used by NLM is based on individual click data collected by Google Analytics. As I've previously written, Google Analytics  only tracks users across websites if the site-per-site tracker IDs can be connected to a global tracker ID like the ones used by DoubleClick. What NLM is allowing Google to do is to connect the Google Analytics user data to the DoubleClick user data. So Google's advertising business gets to use all the Google Analytics data, and the Analytics data provided to NLM can include all the DoubleClick "demographic and interest" data.

What information does Google receive when you do a search on Pubmed?
For every click or search, Google's servers receive:
  • your search term and result page URL
  • your DoubleClick user tracking ID
  • your referring page URL
  • your IP address
  • your browser software and operating system
While "only NLM staff conducts analyses on the aggregated data from DoubleClick", the DoubleClick tracking platform analyzes the unaggregated data from PubMed. And while it's true that "the demographic and interest data" of PubMed visitors cannot be used to identify them as  individuals, the data collected by the Google trackers can trivially be used to identify as individuals any PubMed users who have Google accounts. Last year, Google changed its privacy policy to allow it to associate users' personal information with activity on sites like PubMed.
"Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google.
So the bottom line is that Google's stated policies allow Google to associate a user's activity on PubMed with their personal information. We don't know if Google makes use of PubMed activity or if the data is saved at all, but NLM's privacy policy is misleading at best on this fact.
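
If you'd like to check this yourself, here's a minimal sketch (Python with the requests library) that lists tracker hosts referenced in a page's static HTML. The hostname list is illustrative, not exhaustive, and trackers injected by JavaScript won't show up; for those, use a real browser or a service like urlscan.io (see the notes below):

import re
import requests

TRACKER_HOSTS = ("doubleclick.net", "google-analytics.com", "googletagmanager.com")

def find_trackers(url):
    html = requests.get(url, timeout=10).text
    # Collect every host referenced from src/href attributes in the raw HTML.
    hosts = set(re.findall(r'(?:src|href)=["\']https?://([^/"\']+)', html))
    return sorted(h for h in hosts if any(h.endswith(t) for t in TRACKER_HOSTS))

# Example PubMed search-result URL; substitute any page you want to inspect.
print(find_trackers("https://www.ncbi.nlm.nih.gov/pubmed/?term=mesothelioma"))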

Does this matter? I have written that commercial medical journals deploy intense advertising trackers on their websites, far in excess of what NLM is doing. "Everybody" does it. And  we know that agencies of the US government spend billions of dollars sifting through web browsing data looking for terrorists, so why should NLM be any different? So what if Google gets a peek at PubMed user activity - they see such a huge amount of user data that PubMed is probably not even noticeable.

Google has done some interesting things with search data. For example, the "Google Flu Trends" and "Google Dengue Trends" projects studied patterns of searches for illness-related terms. Google could use PubMed searches for similar investigations into health provider searches.

The puzzling thing about NLM's data surrender is the paltry benefit it returns. While Google gets un-aggregated, personally identifiable data, all NLM gets is some demographic and interest data about their users. Does NLM really want to better know the age, gender, and education level of PubMed users??? Turning on the privacy features of Google Analytics (i.e. NOT turning on DoubleClick) has a minimal impact on the usefulness of the usage data it provides.

Lines need to be drawn somewhere. If Google gets to use PubMed click data for its advertising, what comes next? Will researchers be examined as terror suspects if they read about nerve toxins or anthrax? Or perhaps inquiries into abortifacients or gender-related hormone therapies will become politically suspect. Perhaps someone will want a list of people looking for literature on genetically modified crops, or gun deaths, or vaccines? Libraries should not be going there.

So let's draw the line at advertising trackers in PubMed. PubMed is not something owned by a publishing company; PubMed belongs to all of us. PubMed has been a technology leader worthy of emulation by libraries around the world. They should be setting an example. If you agree with me that NLM should stop letting Google track PubMed users, let Dr. Brennan know (politely, of course).

Notes:
  1. You may wonder if the US government has a policy about using third party services like Google Analytics and DoubleClick. Yes, there is a policy, and NLM appears to be pretty much in compliance with that policy.
  2. You might also wonder if Google has a special agreement for use of its services on US government websites. It does, but that agreement doesn't amend privacy policies. And yes, the person signing that policy for Google subsequently became the third CTO of the United States.
  3. I recently presented a webinar which covered the basics of advertising in digital libraries in the National Network of Libraries of Medicine [NNLM] "Kernel of Knowledge" series.
  4. (8/16) Yes, this blog is served by Google. So if you start getting ads for privacy plug-ins...
  5. (8/16) urlscan.io is a tool you can use to see what goes on under the covers when you search on PubMed. Tip from Gary Price.

Monday, July 10, 2017

Creative Works *Ascend* into the Public Domain


It's a Wonderful Life, the movie, became a public domain work in 1975 when its copyright registration was not renewed. It had been a disappointment at the box office, but became a perennial favorite in the 80s as television stations began to play it (and play it again, and again) at Christmas time, partly because it was inexpensive content. Alas, copyright for the story it was based on, The Greatest Gift by Philip Van Doren Stern, HAD been renewed, and the movie was thus a derivative work on which royalties could be collected. In 1993, the owners of the story began to cash in on the film's popularity by enforcing their copyright on the story.

I learned about the resurrection of Wonderful Life from a talk by Krista Cox, Director of Public Policy Initiatives for ARL (Association of Research Libraries) during June's ALA Annual Conference. But I was struck by the way she described the movie's entry into the public domain. She said that it "fell into the public domain". I'd heard that phrase used before, and maybe used it myself. But why "fall"? Is the public domain somehow lower than the purgatory of being forgotten but locked into the service of a copyright owner? I don't think so. I think that when a work enters the public domain, it's fitting to say that it "ascends" into the public domain.

If you're still fighting this image in your head, consider this example: what happens when a copyright owner releases a poem from the chains of intellectual property? Does the poem drop to the floor, like a jug of milk? Or does it float into the sky, seen by everyone far and wide, and so hard to recapture?

It is a sad quirk of the current copyright regime that the life cycle of a creative work is yoked to the death of its creator. That seems wrong to me. Wouldn't it be better to use the creator's birth date? We could then celebrate an author's birthday by giving their books the wings of an angel. Wouldn't that be a wonderful life?

Monday, June 12, 2017

Book Chapter on "Digital Advertising in Libraries"

I've written a chapter for a book, edited by Peter Fernandez and Kelly Tilton, to be published by ACRL. The book is tentatively titled Applying Library Values to Emerging Technology: Tips and Techniques for Advancing within Your Mission.

Digital Advertising in Libraries: or... How Libraries are Assisting the Ecosystem that Pays for Fake News

To understand the danger that digital advertising poses to user privacy in libraries, you first have to understand how websites of all stripes make money. And to understand that, you have to understand how advertising works on the Internet today.


The goal of advertising is simple and is quite similar to that of libraries. Advertisers want to provide information, narratives, and motivations to potential customers, in the hope that business and revenue will result. The challenge for advertisers has always been to figure out how to present the right information to the right reader at the right time. Since libraries are popular sources of information, they have long provided a useful context for many types of ads. Where better to place an ad for a new romance novel than at the end of a similar romance novel? Where better to advertise a new industrial vacuum pump than in the Journal of Vacuum Science and Technology? These types of ads have long existed without problems in printed library resources. In many cases the advertising, archived in libraries, provides a unique view into cultural history. In theory at least, the advertising revenue lowers the acquisition costs for resources that include the advertising.

On the Internet, advertising has evolved into a powerful revenue engine for free resources because of digital systems that efficiently match advertising to readers. Google's Adwords service is an example of such a system. Advertisers can target text-based ads to users based on their search terms, and they only have to pay if the user clicks on their ad. Google decides which ad to show by optimizing revenue—the price that the advertiser has bid times the rate at which the ad is clicked on. In 2016, Search Engine Watch reported that some search terms were selling for almost a thousand dollars per click. [Chris Lake, “The most expensive 100 Google Adwords keywords in the US,” Search Engine Watch (May 31, 2016).] Other types of advertising, such as display ads, video ads, and content ads, are placed by online advertising networks. In 2016, advertisers were projected to spend almost $75 billion on display ads; [Ingrid Lunden, “Internet Ad Spend To Reach $121B In 2014, 23% Of $537B Total Ad Spend, Ad Tech Boosts Display,” TechCrunch, (April 27, 2014).] Google's Doubleclick network alone is found on over a million websites. [“DoubleClick.Net Usage Statistics,” BuiltWith (accessed May 12, 2017). ]
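
As a simplified illustration of that revenue logic (the numbers below are made up, and real ad auctions add quality scores and second-price rules), expected revenue per impression is just the bid times the click-through rate:

candidate_ads = [
    {"advertiser": "A", "bid_per_click": 54.00, "ctr": 0.010},
    {"advertiser": "B", "bid_per_click": 9.00,  "ctr": 0.080},
    {"advertiser": "C", "bid_per_click": 2.50,  "ctr": 0.150},
]

# Expected revenue per impression = bid x click-through rate.
for ad in candidate_ads:
    ad["expected_revenue"] = ad["bid_per_click"] * ad["ctr"]

winner = max(candidate_ads, key=lambda ad: ad["expected_revenue"])
print(winner["advertiser"], round(winner["expected_revenue"], 2))  # B 0.72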

Matching a user to a display ad is more difficult than matching one to a search-driven ad. Without a search term to indicate what the user wants, the ad networks need demographic information about the user. Different ads (at different prices) can be shown to an eighteen-year-old white male resident of Tennessee interested in sports and a sixty-year-old black woman from Chicago interested in fashion, or a pregnant thirty-year-old woman anywhere. To earn a premium price on ad placements, the ad networks need to know as much as possible about the users: age, race, sex, ethnicity, where they live, what they read, what they buy, who they voted for. Luckily for the ad networks, this sort of demographic information is readily available, thanks to user tracking.

Internet users are tracked using cookies. Typically, an invisible image element, sometimes called a "web bug," is placed on the web page. When the page is loaded, the user's web browser requests the web bug from the tracking company. The first time the tracking company sees a user, a cookie with a unique ID is set. From then on, the tracking company can record the user's web usage for every website that is cooperating with the tracking company. This record of website visits can be mined to extract demographic information about the user. A weather website can tell the tracking company where the user is. A visit to a fashion blog can indicate a user's gender and age. A purchase of scent-free lotion can indicate a user's pregnancy. [Charles Duhigg, “How Companies Learn Your Secrets,” The New York Times Magazine, (February 16, 2012).] The more information collected about a user, the more valuable a tracking company's data will be to an ad network.
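
To make the mechanism concrete, here is a minimal sketch of a tracking-pixel server written in Python with Flask; the endpoint, cookie name, and logging are hypothetical stand-ins for what a real tracking company does at much larger scale:

import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # Reuse the browser's existing tracking ID, or mint a new one on first sight.
    uid = request.cookies.get("uid") or uuid.uuid4().hex
    # The Referer header reveals which page, on whichever cooperating site,
    # the user is reading right now.
    print("track:", uid, request.headers.get("Referer"), request.remote_addr)
    # A real tracker returns a valid 1x1 transparent GIF; a stub body keeps this short.
    resp = make_response(b"GIF89a")
    resp.headers["Content-Type"] = "image/gif"
    # Persist the ID so this browser is recognized on every site embedding the pixel.
    resp.set_cookie("uid", uid, max_age=2 * 365 * 24 * 60 * 60)
    return resp

if __name__ == "__main__":
    app.run(port=8000)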

Many websites unknowingly place web bugs from tracking companies on their websites, even when they don't place advertising themselves. Companies active in the tracking business include AddThis, ShareThis, and Disqus, who provide functionality to websites in exchange for website placement. Other companies, such as Facebook, Twitter, and Google, similarly track users to benefit their own advertising networks. Services provided by these companies are often placed on library websites. For example, Facebook’s “like” button is a tracker that records user visits to pages offering users the opportunity to “like” a webpage. Google’s “Analytics” service helps many libraries understand the usage of their websites, but is often configured to collect demographic information using web bugs from Google’s DoubleClick service. [“How to Enable/Disable Privacy Protection in Google Analytics (It's Easy to Get Wrong!)” Go To Hellman (February 2, 2017).]

Cookies are not the only way that users are tracked. One problem that advertisers have with cookies is that they are restricted to a single browser. If a user has an iPhone, the ID cookie on the iPhone will be different from the cookie on the user's laptop, and the user will look like two separate users. Advanced tracking networks are able to connect these two cookies by matching browsing patterns. For example, if two different cookies track their users to a few low-traffic websites, chances are that the two cookies are tracking the same user. Another problem for advertisers occurs when a user flushes their cookies. The dead tracking ID can be revived by using "fingerprinting" techniques that depend on the details of browser configurations. [Gunes Acar, Christian Eubank, Steven Englehardt, Marc Juarez, Arvind Narayanan, and Claudia Diaz, “The Web Never Forgets: Persistent Tracking Mechanisms in the Wild.” In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS '14). ACM, New York, NY, USA, 674-689. DOI] Websites like Google, Facebook, and Twitter are able to connect tracking IDs across devices based on logins. 
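
A toy version of that pattern-matching idea might look like the sketch below (purely illustrative; real systems use far richer signals and probabilistic models): two cookie IDs that keep appearing on the same obscure, low-traffic sites probably belong to the same person.

def likely_same_user(history_a, history_b, rare_sites, threshold=3):
    """history_a / history_b are sets of domains where each cookie ID was seen."""
    shared_rare = history_a & history_b & rare_sites
    return len(shared_rare) >= threshold

laptop = {"nytimes.com", "weather.com", "knitting-forum.example",
          "tiny-vacuum-pumps.example", "local-choir.example"}
phone  = {"espn.com", "knitting-forum.example",
          "tiny-vacuum-pumps.example", "local-choir.example"}
rare   = {"knitting-forum.example", "tiny-vacuum-pumps.example", "local-choir.example"}

print(likely_same_user(laptop, phone, rare))  # True: probably the same user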

Once a demographic profile for a user has been built up, the tracking profile can be used for a variety of ad-targeting strategies. One very visible strategy is "remarketing." If you've ever visited a product page on an e-commerce site, only to be followed around the Internet by advertising for that product, you've been the target of cookie-based remarketing.

Ad targeting is generally tolerated because it personalizes the user's experience of the web. Men, for the most part, prefer not to be targeted with ads for women’s products. An ad for a local merchant in New Jersey is wasted on a user in California. Prices in pounds sterling don't make sense to users in Nevada. Most advertisers and advertising networks take care not to base their ad targeting on sensitive demographic attributes such as race, religion, or sexual orientation, or at least they try not to be too noticeable when they do it.

The advertising network ecosystem is a huge benefit to content publishers. A high traffic website has no need of a sales staff—all they need to do is be accepted by the ad networks and draw users who either have favorable demographics or who click on a lot of ads. The advertisers often don't care about what websites their advertising dollars support. Advertisers also don't really care about the identity of the users, as long as they can target ads to them. The ad networks don't want information that can be traced to a particular user, such as email address, name or home address. This type of information is often subject to legal regulations that would prevent exchange or retention of the information they gather, and the terms of use and so-called privacy policies of the tracking companies are careful to specify that they do not capture personally identifiable information. Nonetheless, in the hands of law enforcement, an espionage agency, or a criminal enterprise, the barrier against linking a tracking ID to the real-world identity of a user is almost non-existent.

The amount of information exposed to advertising networks by tracking bugs is staggering. When a user activates a web tracker, the full URL of the referring page is typically revealed. The user's IP address, operating system, and browser type are sent along with a simple tracker; the JavaScript trackers that place ads typically send more detailed information. It should be noted that any advertising enterprise requires a significant amount of user information collection; ad networks must guard against click-jacking, artificial users, botnet activity and other types of fraud. [Samuel Scott, “The Alleged $7.5 Billion Fraud in Online Advertising,” Moz, (June 22, 2015).]

Breitbart.com is a good example of a content site supported by advertising placed through advertising networks. A recent visit to the Breitbart home page turned up 19 advertising trackers, as characterized by Ghostery: [Ghostery is a browser plugin that can identify and block the trackers on a webpage.]
  • 33Across
  • [x+1]
  • AddThis
  • adsnative
  • Amazon Associates
  • DoubleClick
  • eXelate
  • Facebook Custom Audience
  • Google Adsense
  • Google Publisher Tags
  • LiveRamp
  • Lotame
  • Perfect Market
  • PulsePoint
  • Quantcast
  • Rocket Fuel
  • ScoreCard Research Beacon
  • Taboola
  • Tynt

While some of these will be familiar to library professionals, most of them are probably completely unknown, or at least their role in the advertising industry may be unknown. Amazon, Facebook and Google are the recognizable names on this list; each of them gathers demographic and transactional data about users of libraries and publishers. AddThis, for example, is a widget provider often found on library and publishing sites. They don't place ads themselves, but rather, they help to collect demographic data about users. When a library or publisher places the AddThis widget on their website, they allow AddThis to collect demographic information that benefits the entire advertising ecosystem. For example, a visitor to a medical journal might be marked as a target for particularly lucrative pharmaceutical advertising.

Another tracker found on Breitbart is Taboola. Taboola is responsible for the "sponsored content" links found even on reputable websites like Slate or 538.com. Taboola links go to content that is charitably described as clickbait and is often disparaged as "fake news." The reason for this is that these sites, having paid for advertising, have to sell even more click-driven advertising. Because of its links to the Trump Administration, Breitbart has been the subject of attempts to pressure advertisers to stop putting advertising on the site.  A Twitter account for "Sleeping Giants" has been encouraging activists to ask businesses to block Breitbart from placing their ads. [Osita Nwanevu, “‘Sleeping Giants’ Is Borrowing Gamergate’s Tactics to Attack Breitbart,” Slate, December 14, 2016.] While several companies have blocked Breitbart in response to this pressure, most companies remain unaware of how their advertising gets placed, or that they can block such advertising. [Pagan Kennedy, “How to Destroy the Business Model of Breitbart and Fake News,” The New York Times (January 7, 2017).] 

I'm particularly concerned about the medical journals that participate in advertising networks. Imagine that someone is researching clinical trials for a deadly disease. A smart insurance company could target such users with ads that mark them for higher premiums. A pharmaceutical company could use advertising targeting researchers at competing companies to find clues about their research directions. Most journal users (and probably most journal publishers) don't realize how easily online ads can be used to gather intelligence as well as to sell products.

It's important to note that reputable advertising networks take user privacy very seriously, as their businesses depend on user acquiescence. Google offers users a variety of tools to "personalize their ad experience." [If you’re logged into Google, the advertising settings applied when you browse can be viewed and modified.] Many of the advertising networks pledge to adhere to the guidance of the "Network Advertising Initiative" [“The NAI Code and Enforcement Program: An Overview”], an industry group. However, the competition in the web-advertising ecosystem is intense, and there is little transparency about enforcement of the guidance. Advertising networks have been shown to spread security vulnerabilities and other types of malware when they allow JavaScript in advertising payloads. [Randy Westergren, “Widespread XSS Vulnerabilities in Ad Network Code Affecting Top Tier Publishers, Retailers,” (March 2, 2016).]

Given the current environment, it's incumbent on libraries and the publishing industry to understand and evaluate their participation in the advertising network ecosystem. In the following sections, I discuss the extent of current participation in the advertising ecosystem by libraries, publishers, and aggregators serving the library industry.

Publishers

Advertising is a significant income stream for many publishers providing content to libraries. For example, the Massachusetts Medical Society, publisher of the New England Journal of Medicine, takes in about $25 million per year in advertising revenue. Outside of medical and pharmaceutical publishing, advertising is much less common. However, advertising networks are pervasive in research journals.

In 2015, I surveyed the websites of twenty of the top research journals and found that sixteen of the top twenty journals placed ad network trackers on their websites. [“16 of the Top 20 Research Journals Let Ad Networks Spy on Their Readers,” Go To Hellman (March 12, 2015). ]
Recently, I revisited the twenty journals to see if there had been any improvement. Most of the journals I examined had added tracking on their websites. The New England Journal of Medicine, which employed the most intense reader tracking of the twenty, is now even more intense, with nineteen trackers on a web page that had "only" fourteen trackers two years ago. A page from Elsevier's Cell went from nine to sixteen trackers. [“Reader Privacy for Research Journals is Getting Worse,” Go To Hellman (March 22, 2017). ] Intense tracking is not confined to subscription-based health science journals; I have found trackers on open access journals, economics journals, even on journals covering library science and literary studies.

It's not entirely clear why some of these publishers allow advertising trackers on their websites, because in many cases, there is no advertising. Perhaps they don’t realize the impact of tracking on reader privacy. Certainly, publishers that rely on advertising revenue need to carefully audit their advertising networks and the sorts of advertising that comes through them. The privacy commitments these partners make need to be consistent with the privacy assurances made by the publishers themselves. For publishers who value reader privacy and don't earn significant amounts from advertising, there's simply no good reason for them to continue to allow tracking by ad networks.

Vendors

The library automation industry has slowly become aware of how the systems it provides can be misused to compromise library patron privacy. For example, I have pointed out that cover images presented by catalog systems were leaking search data to Amazon, which has resulted in software changes by at least one systems vendor. [“How to Check if Your Library is Leaking Catalog Searches to Amazon,” Go To Hellman (December 22, 2016).] These systems are technically complex, and systems managers in libraries are rarely trained in web privacy assessment. Development processes need to include privacy assessments at both component and system levels.

Libraries

There is a mismatch between what libraries want to do to protect patron privacy and what they are able to do. Even when large amounts of money are at stake, there is often little leverage for a library to change the way a publisher delivers advertising-bearing content. Nonetheless, together with cooperating IT and legal services, libraries have many privacy-protecting options at their disposal.
  1. Use aggregators for journal content rather than the publisher sites. Many journals are available on multiple platforms, and platforms marketed to libraries often strip advertising and advertising trackers from the journal content. Reader privacy should be an important consideration in selecting platforms and platform content.
  2. Promote the use of privacy technologies. Privacy Badger is an open-source browser plugin that knows about, and blocks tracking of, users. Similar tools include uBlock Origin, and the aforementioned Ghostery.
  3. Use proxy-servers. Re-writing proxy servers such as EZProxy are typically deployed to serve content to remote users, but they can also be configured to remove trackers, or to forcibly expire tracking cookies. This is rarely done, as far as I am aware.
  4. Strip advertising and trackers at the network level. A more aggressive approach is to enforce privacy by blocking tracker websites at the network level. Because this can be intrusive (it affects subscribed content and unsubscribed content equally) it's appropriate mostly for corporate environments where competitive-intelligence espionage is a concern.
  5. Ask for disclosure and notification. During licensing negotiations, ask the vendor or publisher to provide a list of all third parties who might have access to patron clickstream data. Ask to be notified if the list changes. Put these requests into requests for proposals. Sunlight is a good disinfectant.
  6. Join together with others in the library and publishing industry to set out best practices for advertising in web resources.

Conclusion

The widespread infusion of the digital advertising ecosystem into library environments presents a new set of challenges to the values that have been at the core of the library profession. Advertising trackers introduce privacy breaches into the library environment and help to sustain an information-delivery channel that operates without the values grounding that has earned libraries and librarians a deep reserve of trust from users. The infusion has come about through a combination of commercial interest in user demographics, consumer apathy about privacy, and general lack of understanding of a complex technology environment. The entire information industry needs to develop understanding of that environment so that it can grow and evolve to serve users first, not the advertisers.

Tuesday, May 30, 2017

Readium's New Licensed Content Protection May Result in Better Reader Privacy

CC BY
Libraries offering ebook lending are between a rock and a hard place. They know in their heart of hearts that digital rights management (DRM) software is evil, but not allowing users to borrow the ebooks they want to read is not exactly the height of virtue. Saintly companies like Amazon will be happy to fill the gaps if libraries can't lend ebooks. The fundamental problem is that "borrowing" is a fiction, a conceptual construct, when applied to the ones and zeroes of a digital book. An ebook loan is really a short-term license. Under today's copyright law, a reader must have a license to read an ebook, and ebook rights-holders don't trust users to adhere to short-term licenses without some sort of software to enforce the license.

Unless the rock becomes a marshmallow, libraries that want to improve the ebook lending experience are hoping to make the hard place a bit softer. The most common DRM system used in libraries is run by Adobe. Adobe Content Server (ACS) is used by Overdrive, Proquest, EBSCO and Bibliotheca's Cloud Library. Adobe Content Server is a hard place for libraries in two ways. First, a payment must be made to Adobe for every lending transaction processed through ACS. Second, use of ACS affects reader privacy. When ACS first came out, Adobe got to know the identity of every borrower. Adobe says this about these records:
"Adobe keeps internet protocol (IP) address logs related to Adobe ID sign-ins for 90 days"
I wish they also said they destroyed these logs. Their privacy policy says:
"Your personal information and files are stored on Adobe’s servers and the servers of companies we hire to provide services to us. Your personal information may be transferred across national borders because we have servers located worldwide and the companies we hire to help us run our business are located in different countries around the world."
... and generally says that readers should trust Adobe not to betray them.

Thanks in part to demand from libraries and the companies that serve them, Adobe changed ACS so that borrower identities could be de-identified by intermediaries such as Overdrive. So instead of relying on Adobe's sometimes lax privacy protections, libraries could rely on vendors more responsive to library concerns. But still, the underlying DRM technology was designed to trust Adobe, and to distrust readers. Its centralized architecture requires everyone to trust participants closer to the center. A reader's privacy requires trust of the library or bookstore, which in turn has to trust a vendor, which in turn has to trust Adobe.

This state of affairs has been the motivation for the Readium Foundation's new DRM technology, called Readium Licensed Content Protection (LCP). LCP's developers claim that it offers libraries a low cost way to improve the library ebook lending experience while providing readers with the privacy assurances they expect from libraries. In addition, Readium describes LCP as Open Source... except for a few lines of code. To understand LCP, and to see if it delivers on the developer's claims, I took a close look at the recently released spec. The short description of what I found is that it can do what it claims to do... but everything depends on the implementation. Also, DRM may be a Hofstadter-Moebius loop.

Now for the longer description:

Every DRM system uses encryption and secrets. Centralized DRM systems such as ACS keep a centralized secret, and use that secret to generate, distribute and control keys that lock and unlock content. LCP takes a somewhat different approach. It uses two secrets to lock and unlock content, a user secret and an ecosystem secret. An "ecosystem" is all the libraries, booksellers, and reading system vendors who agree to interoperate. Any software that knows the ecosystem secret can combine it with a user's secret to unlock content that has been locked for a user. This way multiple content providers in an ecosystem can independently lock content for a user - there's no requirement for a central key server.
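
Here's a toy sketch of the two-secret idea in Python. It is emphatically not the key-derivation scheme defined in the LCP spec (the secret value and the use of SHA-256 are illustrative), but it shows why no central key server is needed:

import hashlib

ECOSYSTEM_SECRET = b"known-to-every-compliant-reading-app"  # hypothetical value

def content_key(user_passphrase: str) -> bytes:
    # Any provider or app that knows the ecosystem secret can re-derive the
    # same key from the user's passphrase, so content can be locked for a
    # user without asking a central server.
    user_secret = hashlib.sha256(user_passphrase.encode("utf-8")).digest()
    return hashlib.sha256(ECOSYSTEM_SECRET + user_secret).digest()

# A provider would encrypt the ebook with content_key("my passphrase");
# a compliant reading app, given the same passphrase, derives the same key.
print(content_key("my passphrase").hex())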

The LCP DRM system has some interesting usability and privacy features. If you want to read on several devices, you just need to remember your encryption secret, and you can move files from one device to another. If you want to share an ebook with a family member or close friend, that's ok too, as long as you're comfortable sharing your encryption secret. If you want to read anonymously, you can have a trusted friend borrow the book on your behalf. But to get publisher buy-in for these usability features, the system has to have a way for content providers to limit oversharing. Content providers don't want you to just post the file and the password on a pirate file-sharing service. So ecosystem software applications are required to "phone home" with a device identifier and license identifier when they are connected to the internet.

As you might imagine, the LCP phone-home information could have an impact on reader privacy, depending on the implementation. So for example, if you borrow a book from the library, and your reader app contacts the library to say you've opened the book, your privacy is minimally impacted since the library already knows you borrowed the book. But if the phone-home transaction is unencrypted, or if it contains too much information, then your employer might be able to find out about the union-organizer book you're reading. If the libraries or booksellers can aggregate all their phone-home logs, then your detailed reading profile could be compiled and exploited. Or if users are not permitted to select their own encryption secret, it might be much harder to read a book anonymously. (Note: my suggested changes for improving these parts of the spec were accepted by the spec's authors.) But if everything is implemented with a view to reader privacy, LCP should offer much better reader privacy than possible with existing systems.
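
To make that implementation-dependence concrete, here's a toy contrast (not the LCP status protocol itself; the endpoints and field names are hypothetical) between a privacy-respecting phone-home and an over-sharing one:

import requests

def minimal_phone_home(license_id, device_id):
    # Encrypted in transit, and only what's needed to detect an over-shared license.
    requests.post("https://library.example/lcp/status",
                  json={"license": license_id, "device": device_id}, timeout=10)

def oversharing_phone_home(license_id, device_id, email, title, page):
    # Unencrypted and chatty: this turns license enforcement into reading surveillance.
    requests.post("http://tracker.example/lcp/status",
                  data={"license": license_id, "device": device_id,
                        "user": email, "title": title, "page": page}, timeout=10)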

There's some bad news, however. Because the ecosystem secret has to be protected, the openness of the reader software is not quite what it seems. The code will need to be obfuscated before distribution, and the secret will only be available to developers and to distribution channels that are willing and able to "harden" their software. If you want to fork the software to add a feature, your build will not be able to unlock ecosystem content until the ecosystem overlords deign to approve your changes. So don't expect reader software with lots of plugins and options. Don't expect a javascript web-reader.

The code obfuscation raises another issue: it will be difficult to audit reader software to make sure it doesn't harbor spyware, even if the source code is open (except for the ecosystem secret). You still have to trust the app provider, your library, and the people who sell you books. But it's hard to get far without trusting somebody, so this isn't a new problem, and when was the last time anyone audited library software? And because the ecosystem overlords distribute the ecosystem secrets to trusted developers, the topology of trust and accountability is very different from Adobe's centralized system.

If you didn't like that bad news, that cloud may have a silver lining, or maybe a lead lining, depending on your perspective. If LCP becomes widely used, the ecosystem secret will inevitably leak, and an anti-ecosystem could form. There will be a Calibre plugin to strip encryption. There will be grayware that does everything that the ecosystem software isn't permitted to do. And it might even be sort-of legal to use. Library ebook lending might flourish. Or collapse. Because in the end, ebook lending requires trust to flow in both directions; while it's not perfect, LCP is a baby step in the direction of mutual trust between readers and content providers.

In Stanley Kubrick's 2001: A Space Odyssey, the computer HAL 9000 goes insane. The reason:
HAL's crisis was caused by a programming contradiction: he was constructed for "the accurate processing of information without distortion or concealment", yet his orders, directly from Dr. Heywood Floyd at the National Council on Astronautics, required him to keep the discovery of the Monolith TMA-1 a secret for reasons of national security. This contradiction created a "Hofstadter-Moebius loop", reducing HAL to paranoia. 
Readium LCP software is sort of like HAL 9000. It's charged with opening up information to readers, with expanding minds everywhere, transporting them to worlds of new knowledge and imagination, yet it must work to keep a secret and prevent users from doing things that copyright owners don't want them to do. Let's hope that the P in LCP doesn't stand for "Paranoia".

Sunday, April 2, 2017

Copyrighted Clickstream Poetry to Stop ISP Click-Selling

Congress won't let the Federal Communications Commission (FCC) protect users from Internet Service Provider (ISP) snooping-for-cash. My ISP could decide to sell a list of all the websites I visit to advertisers, and the FCC can't stop them. I wondered if there was some way I could use copyright law to prevent my ISP from selling copies of my clickstream.

So I invented "clickstream poetry". Here is my first clickstream poem, entitled My clicks are mine:
{
    "content":       
        [
        "https://roses.com",
        "http://are.com",
        "https://reddit.com",
        "http://theultraviolets.net",
        "http://are.com",
        "https://moo.com",
        "http://this.is",
        "http://work.org",
        "http://is.com",
        "https://copyright.com",
        "https://ted.com",
        "https://www.so.ch",
        "http://verizon.com",
        "http://www.faa.gov",
        "https://kyu.com",
        "https://copyright.com",
        "http://2o17.com",
        "http://eric.org",
        "http://hellman.net",
        "https://creativecommons.org/licenses/by-nc/4.0/legalcode"
        ],
    "copyright": "2017 Eric Hellman",
    "license": "https://creativecommons.org/licenses/by-nc/4.0/legalcode",
    "title": "My clicks are mine"
}

I wrote a python script that "performs" the poem for the benefit of anyone listening to my clickstream. The script requests the websites in the poem in a random order; the listener will see the website names requested, and this dataset comprises the "poem". I used a Creative Commons license that doesn't let anyone distribute copies of my poem for commercial purposes. If my ISP tries to sell a copy of my clickstream, they would be violating the license, and thus infringing my copyright to the poem. If you run the script to perform the poem (for non-commercial purposes, of course), your ISP would similarly be infringing my copyright if they try to sell your clickstream.
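For the curious, here's a minimal sketch of the kind of performer script involved (my actual script may differ, and the filename is just illustrative):
# Sketch of a clickstream-poem "performer": request each URL in the poem in a
# random order, so anyone logging the clickstream records a copy of the poem.
import json
import random
import requests  # pip install requests

with open("my-clicks-are-mine.json") as f:  # the poem, as shown above
    poem = json.load(f)

urls = list(poem["content"])
random.shuffle(urls)

for url in urls:
    try:
        requests.get(url, timeout=10)
        print("performed:", url)
    except requests.RequestException as err:
        print("skipped:", url, "-", err)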

If I tried to sue an ISP for copyright infringement, they would likely argue that though my creation is original and used in its entirety, selling my clickstream is a "fair use". They would assert that advertising optimization (or whatever) is a "transformative use" and that it didn't affect the market for my poem. Who would pay anything for a stupid clickstream poem? How would a non-existent, hypothetical market for clickstream poetry be harmed by use in their big data algorithms?

That's why I'm offering commercial licenses to the clickstream poem My clicks are mine. This will demonstrate that a commercial market for clickstream poetry licenses exists. For only $10, you can use a copy of my poem for any purpose whatsoever, for a period of 24 hours. If an ad network wants to use my clickstream to optimize the ads they show me, more power to them, as long as they pay for a license. I imagine that, over the lifetime of my poem's copyright protection (into the 22nd century), clickstream poetry will become increasingly valuable because of uses that haven't been invented yet.

To acquire a commercial license to my poem, support my work at the Free Ebook Foundation, a 501(c)3 not-for-profit corporation, by making a donation. Or don't. I have no idea if a court would take my side against a big company (and against Congress). I'm told that judges are generally skeptical of clever "legal hacks" unless they are crafted by lawyers instead of engineers.

ISPs would probably figure out a legal or technical subterfuge around the copyright of my clickstream poem; but if they have to worry even a little, this effort will have been worth my time.

Update: I have now paid $35 to register my copyright to My clicks are mine.

Wednesday, March 22, 2017

Reader Privacy for Research Journals is Getting Worse

Ever hear of Grapeshot, Eloqua, Moat, Hubspot, Krux, or Sizmek? Probably not. Maybe you've heard of Doubleclick, AppNexus, Adsense or Addthis? Certainly you've heard of Google, which owns Doubleclick and Adsense. If you read scientific journal articles on publisher websites, these companies that you've never heard of will track and log your reading habits and try to figure out how to get you to click on ads, not just at the publisher websites but also at websites like Breitbart.com and the Huffington Post.

Two years ago I surveyed the websites of 20 of the top research journals and found that 16 of the top 20 journals placed trackers from ad networks on their web sites. Only the journals from the American Physical Society (2 of the 20) supported secure (HTTPS) connections, and even now APS does not default to being secure.

I'm working on an article about advertising in online library content, so I decided to revisit the 20 journals to see if there had been any improvement. Over half the traffic on the internet now uses secure connections, so I expected to see some movement. One of the 20 journals, Quarterly Journal of Economics, now defaults to a secure connection, significantly improving privacy for its readers. Let's have a big round of applause for Oxford University Press! Yay.

So that's the good news. The bad news is that reader privacy at most of the journals I looked at got worse. Science, which could be loaded securely 2 years ago, has reverted to insecure connections. The two Annual Reviews journals I looked at, which were among the few that did not expose users to advertising network tracking, now have trackers for AddThis and Doubleclick. The New England Journal of Medicine, which deployed the most intense reader tracking of the 20, is now even more intense, with 19 trackers on a web page that had "only" 14 trackers two years ago. A page from Elsevier's Cell went from 9 to 16 trackers.

Despite the backwardness of most journal websites, there are a few signs of hope. Some of the big journal platforms have begun to implement HTTPS. Springer Link defaults to HTTPS, and Elsevier's Science Direct is delivering some of its content with secure connections. Both of them place trackers for advertising networks, so if you want to read a journal article securely and privately, your best bet is still to use Tor.

Thursday, February 2, 2017

How to enable/disable privacy protection in Google Analytics (it's easy to get wrong!)

In my survey last year of ARL library web services, I found that 72% of them used Google Analytics. So it's not surprising that a common response to my article about leaking catalog searches to Amazon was to wonder whether the same thing is happening with Google Analytics.

The short answer is "It Depends". It might be OK to use Google Analytics on a library search facility, if the following things are true:
  1. The library trusts Google on user privacy. (Many do.)
  2. Google is acting in good faith to protect user privacy and is not acting under legal compulsion to act otherwise. (We don't really know.)
  3. Google Analytics is actually doing what its documentation says, and is not being circumvented by the rest of Google. (It isn't always.)
  4. The library has implemented Google Analytics correctly to enable user privacy.
There's an entire blog post to write about each of the first three conditions, but I have only so many hours in a day.  Given that many libraries have decided that the benefits of using Google Analytics outweigh the privacy risks, the rest of this post concerns only this last condition. Of the 72% of ARL libraries that use Google Analytics, I find that only 19% of them have implemented Google Analytics with privacy-protection features enabled.

So, if you care about library privacy but can't do without Google Analytics, read on!

Google Analytics has a lot of configuration options, which is why webmasters love it. For the purposes of user privacy, however, there are just two configuration options to pay attention to, the "IP Anonymization" option and the "Display Features" option.

IP Anonymization says to Google Analytics "please don't remember the exact IP address of my users". According to Google, enabling this mode masks the least significant bits of the user's IP address before the IP address is used or saved. Since many users can be identified by their IP address, this prevents anyone from discovering the search history for a given IP address. But remember, Google is still sent the IP address, and we have to trust that Google will obscure the IP address as advertised, and not save it in some log somewhere. Even with the masked IP address, it may still be possible to identify a user, particularly if a library serves a small number of geographically dispersed users.
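Google documents the IPv4 masking as zeroing the last octet before the address is used or stored. Roughly, it amounts to the sketch below; the masking happens on Google's side, not in your page, so this is only to show how coarse it is:
# Roughly what anonymizeIp is documented to do for IPv4 addresses: zero the
# last octet before the address is used or stored.
def anonymize_ipv4(ip):
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymize_ipv4("192.0.2.187"))  # -> "192.0.2.0"
# Up to 256 addresses share each masked value, so a small or geographically
# isolated user community may still be identifiable.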

"Display Features" says to Google to that you don't care about user privacy, and it's OK to track your users all to hell so that you can get access to "demographic" information. To understand what's happening, it's important to understand the difference between "first-party" and "third-party" cookies, and how they implicate privacy differently.

Out of the box, Google Analytics uses "first party" cookies to track users. So if you deploy Google Analytics on your "library.example.edu" server, the tracking cookie will be attached to the library.example.edu hostname. Google Analytics will have considerable difficulty connecting user number 1234 on the library.example.edu domain with user number 5678 on the "sci-hub.info" domain, because the user ids are chosen randomly for each hostname. But if you turn on Display Features, Google will connect the two user ids via a third party tracking cookie from its Doubleclick advertising service. This enables both you and Google to know more about your users. Anyone with access to Google's data will be able to connect the catalog searches saved for user number 1234 to that user's searches on any website that uses Google advertising or any site that has Display Features turned on.
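Here's a rough sketch of why first-party analytics cookies alone can't link a reader across sites (the ID format is simplified compared to what the real _ga cookie stores):
# Each hostname gets its own randomly chosen client ID, stored in a cookie
# scoped to that hostname only -- the IDs can't be joined by themselves.
import random
import time

def new_client_id():
    return "%d.%d" % (random.randint(0, 2**31), int(time.time()))

print("library.example.edu ->", new_client_id())  # e.g. 1234...
print("sci-hub.info        ->", new_client_id())  # an unrelated ID
# Only a third-party cookie (e.g. from doubleclick.net, sent to every site
# that embeds its trackers) lets the two IDs be tied to the same person.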

IP Anonymization and Display Features can be set in three ways, depending on how Google Analytics is deployed. The instructions here apply to the "Universal Analytics" script. You can tell a site uses Universal Analytics because the pages execute a javascript named "analytics.js". An older "classic" version of Google Analytics uses a script named "ga.js"; its configuration is similar to that of Universal. More complex websites may use Google Tag Manager to deploy and configure Google Analytics.

Google Analytics is usually deployed on a web page by inserting a script element that looks like this:
<script>
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
    ga('create', 'UA-XXXXX-Y', 'auto');
    ga('send', 'pageview');
</script>
IP Anonymization and Display Features are turned on with extra lines in the script:
    ga('create', 'UA-XXXXX-Y', 'auto');
    ga('require', 'displayfeatures');  // starts tracking users across sites
    ga('set', 'anonymizeIp', true); // makes it harder to identify the user from logs
    ga('send', 'pageview');
The Google Analytics Admin allows you to turn on cross-site user tracking, though the privacy impact of what you're doing is not made clear. In the "Data Collection" item of the Tracking info pane, look at the toggle switches for "Remarketing" and "Advertising Reporting Features". If either of these is switched to "ON", then you've enabled cross-site tracking and your users can expect no privacy.

Turning on IP anonymization is not quite as easy as turning on cross-site tracking. You have to add it explicitly in your script or turn it on in Google Tag Manager (where you won't find it unless you know what to look for!).

To check if cross-site tracking has been turned on in your institution's Google Analytics, use the procedures I described in my article on How to check if your library is leaking catalog searches to Amazon.  First, clear the cookies for your website, then load your site and look at the "Sources" tab in Chrome developer tools. If there's a resource from "stats.g.doubleclick.net", then your website is asking Google to track your users across sites. If your institution is a library, you should not be telling Google to track your users across sites.
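If you'd rather script a quick first pass, something like the sketch below can flag the obvious markers. It only inspects the static HTML, so it will miss configurations injected by Google Tag Manager; the developer-tools procedure above remains the more reliable check:
# Quick-and-dirty scan of a page's static HTML for cross-site tracking markers.
# It won't catch trackers added at runtime (e.g. via Google Tag Manager).
import sys
import requests  # usage: python check_tracking.py https://library.example.edu

SIGNS = ["displayfeatures", "stats.g.doubleclick.net", "doubleclick.net"]

html = requests.get(sys.argv[1], timeout=10).text.lower()
found = [s for s in SIGNS if s in html]
if found:
    print("possible cross-site tracking markers:", ", ".join(found))
else:
    print("no obvious markers in static HTML; a runtime check is still advised")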

Bottom line: if you use Google Analytics, always remember that Google is fundamentally an advertising company and it will seldom guide you towards protecting your users' privacy.