Sunday, June 29, 2014

Should the term "Open Access" be restricted to materials with licenses that allow redistribution, like Creative Commons licenses? Or, as some advocate, only to materials with licenses that allow remixing and commercial re-use, like CC-BY and CC-BY-SA?
I had lunch today with folks from OpenEditions, a French publishing organization whose ebook effort I've been admiring for a while. They're here in Las Vegas at the American Library Association Conference, hoping to get libraries interested in the 1,428 ebooks they have on their platform. (Booth 1437!)
Of those, 1,076 are in a program they call "Open Access Freemium". You can read these books on the OpenEditions website for free, without having to register or anything. You can even embed them in your blog. So for example, here's Opinion Mining et Sentiment Analysis by Dominique Boullier and Audrey Lohard:
So is it OpenAccess™?
In this freemium model, the main product being sold is access to the downloadable ebook, whether PDF or EPUB. For libraries, a subscription allows unlimited access with IP-address authentication, along with additional services. Creative Commons licenses, all of which allow format conversion, wouldn't work for this business model, because the free HTML could easily be converted into EPUB and PDF. So they have their own license; you can read it here.
This is clearly not completely open, but there's no doubt that it's usefully open. For me, the biggest problem is that if OpenEditions goes away for some reason (business, politics, natural disaster, or stupidity), the ebooks disappear. Similarly, if OpenEditions' policies change or URLs move, the embeds could break.
On the plus side, OpenEditions has convinced a group of normally conservative publishers of the advantages of creating usefully open versions of over a thousand books. It's a step in the right direction.
Saturday, June 28, 2014
Overdrive is Making My Crazy Dream Come True
Fifteen years ago, I had this crazy dream. I imagined that popular websites would use fancy links to let their readers get books from their local libraries. And that search engines would prefer these links because their users would love to have access to their library books. I built a linking technology and tried to get people to use it. It never took off. I went on to do other things, but it was a good dream.
Tonight, at the opening of the exhibits at the American Library Association conference in Las Vegas, Steve Potash, the founder of Overdrive, pulled me aside and said he had something cool to show me.
- Go search for a popular book on Bing, or try this one.
- Notice the infobox on the right. Look at the "Read This Book" link. Click it.
- Now check out this Huffington Post article. Note the embedded book sample.
- If the Overdrive system recognizes you, it takes you to your library's Overdrive collection. If not, clicking "Borrow" gets you a list of Overdrive libraries near you.
Wow!

The read-on-site feature works sometimes and not others, so there are still a bunch of kinks for Overdrive to work out. But that's not really the point.
The reason the Huffposts and Buzzfeeds of the world like this is not so much the customization of the link, which is what I was trying to sell, but the fact that the book is embedded on the host website. That keeps people on their site longer, and they click more ads.
Embeds are the magic of the day. You heard it here first.
Speaking of dreams, I'm having a hard time in Vegas figuring out what's real and what isn't.
Posted by Eric at 12:28 AM | 2 comments
Labels: ALA Annual, ebooks, Libraries, Overdrive
Friday, June 20, 2014
One Human Brain = 8 Global Internets (Total Data Rate)
It's a trope of science fiction movies. Somebody gets the content of their brain sucked out and transferred to some other brain over a wire. So how much data would that be?
This question occurred to me a while ago when I attended "Downloading the Brain", a panel discussion hosted by the World Science Festival. Michel Maharbiz of UC Berkeley talked about the "Neural Dust" his group is developing. (Apparently the paper with their first results is in the review process.) The "dust" is a new way to monitor neurons without needing a wire. A tiny CMOS silicon chip, just a hundred microns or so across, is placed on the neuron. On the chip is a piezoelectric resonator. An acoustic wave is used to power and interrogate the chip, and thus to read out the electric fields in the neuron. It's very similar to the way RFID chips are interrogated by radio waves to do things like locate books in a library or trucks on a highway.
So imagine you could use this neural dust, one mote per neuron, to "read out" a brain. How much data would this be?
Get out your envelope backs!
The cerebral cortex of a typical human brain has about 20 billion neurons. That's a lot of neurons, so forget about actually dusting them all. But pretend you could. I figure ten thousand bytes per second ought to be enough to capture the output of one neuron. A single neuron isn't that fast: it fires at most about 200 times a second. But we don't really know how much information is in the precise timing of the pulses, or whether the magnitude of a pulse encodes additional information. (There is a HUGE amount we don't know about how the brain works!) So 200 trillion bytes per second, or 1.6 petabits per second, should be roughly enough to transmit the complete state of a brain. (Not including the connection patterns; that's a whole 'nother envelope!)
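If you'd rather check the envelope in code, here's the same arithmetic as a quick Python sketch. The 10,000 bytes per second per neuron is my assumption from above, not a measured number.

```python
# Back of the envelope: one dust mote per cortical neuron.
neurons = 20e9                 # ~20 billion neurons in the cerebral cortex
bytes_per_neuron_per_s = 1e4   # assumed: 10,000 bytes/s captures one neuron's output

total_bytes_per_s = neurons * bytes_per_neuron_per_s
total_bits_per_s = 8 * total_bytes_per_s

print(f"{total_bytes_per_s:.0e} bytes/s")           # 2e+14 bytes/s
print(f"{total_bits_per_s / 1e15:.1f} petabits/s")  # 1.6 petabits/s
```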
How much is that?
It's about 1 optical fiber. The maximum conceivable bandwidth of a single-mode optical fiber is set by the frequency range where the glass is clear enough to transmit light without melting. That number is about 1 petabit (10^15 bits) per second, depending on the transmission distance (see this discussion on StackExchange). Bit rates of about 10% of that have been achieved in the laboratory using "Space Division Multiplexing" (see this news article or this keynote from a real expert), while the current generation of optical networking products uses multiple channels of 100 Gigabit Ethernet to achieve as much as 10 Tb/s on a fiber, about 1% of the theoretical limit. A petabit per second is a ways in the future, but so is our neural dust. Even now, we could probably fit a brain dump on 16 of the laboratory fiber systems.
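The fiber comparison in the same style; the rates are the rough figures quoted above, and the count of commercial fibers is my own extra division:

```python
brain = 1.6e15       # brain readout rate, bits/s, from the envelope above
fiber_limit = 1e15   # rough theoretical maximum for one single-mode fiber, bits/s
fiber_lab = 0.1e15   # ~10% of the limit, demonstrated with space division multiplexing
fiber_today = 10e12  # ~10 Tb/s, current commercial optical networking products

print(brain / fiber_lab)    # 16.0 laboratory-grade fibers for a full brain dump
print(brain / fiber_today)  # 160.0 of today's commercial fibers
```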
So we can imagine putting the bandwidth of a brain onto a cable we can hold in our hands.
But how much is THAT?
Cisco puts out a report every year estimating the total traffic on the internet. This year, they're estimating that the total IP traffic in the world is 62,476 petabytes per month. That's about 190 terabits/second. So a brain readout would be about 8 times the internet's total data rate.
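Here's the unit conversion, assuming a 30-day month; the small differences from the rounded numbers above are just rounding:

```python
cisco_pb_per_month = 62_476         # Cisco's estimate of global IP traffic, petabytes/month
seconds_per_month = 30 * 24 * 3600  # about 2.6 million seconds

internet_bits_per_s = cisco_pb_per_month * 1e15 * 8 / seconds_per_month
print(f"{internet_bits_per_s / 1e12:.0f} Tb/s")         # ~193 Tb/s
print(f"{1.6e15 / internet_bits_per_s:.1f} internets")  # a brain readout is ~8.3 internets
```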
Right now, powering that much dust would be impractical. A neural dust mote currently uses about half a milliwatt of power, which means 10 megawatts to read out the whole brain. So the brain gets fried, but it was just watching Netflix.
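And the power budget, same envelope:

```python
mote_power_w = 0.5e-3  # half a milliwatt per neural dust mote
motes = 20e9           # one mote per cortical neuron

print(f"{mote_power_w * motes / 1e6:.0f} MW")  # 10 MW delivered into the skull
```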
[Image: http://arxiv.org/abs/1307.2196]
[Image: http://dx.doi.org/10.3389/neuro.11.002.2010]
Posted by Eric at 7:13 PM | 2 comments
Labels: Neurobiology, physics