Wednesday, May 30, 2007

A Nobelist in the Senate, and a failure to educate

I just happened to read in Science that the 2003 chemistry Nobel laureate Peter Agre was contemplating running for Senate in Minnesota. I thought this was good news for a number of reasons.

One important reason is that there is a serious paucity of leaders currently in the US Senate and Congress who can make even a simple, coherent argument about science, let alone make sound policy decisions on science. In an ideal democracy, the leadership of a nation should represent a diverse spectrum of the intelligentsia of that country, as well as have a robust representation of “common folk”. However, it seems like the required educational qualification for the Senate or Congress is law, and a majority of the country's leaders are lawyers by training. I’ve nothing against lawyers, but while they can argue about anything, it helps to have well-qualified representatives from diverse areas, all of which are critical for the country’s progress (science, economics, international affairs and health, to name just a few). It can easily be argued that having such leaders would allow issues like stem cell research or abortion to be discussed cogently, with facts on board, and would avoid purely emotional responses and knee-jerk counter-responses.

The US probably has more active scientists than the rest of the world put together, including dozens of Nobel laureates. Many of them have also proven to be able administrators and have helped drive science policy forward. Surprisingly, not one single Nobel laureate has ever been elected to Senate or Congress. Nor are there many scientists in Senate or Congress currently (I read an article months ago that said there were four total. I apologize for not being able to find the link). While I don’t have numbers, this seems to be in contrast to Europe, where there are a number of high profile scientist-politicians (including Angela Merkel, the chancellor of Germany).

I don’t think Agre has a serious chance of winning (or beating Al Franken), but it’s a start, so good luck to him.

(On a somewhat related note, it is amusing to note that the Indian minister for Science and Technology is a celebrated lawyer, Kapil Sibal. While being a lawyer shouldn’t be held against him, a lifelong lack of any commitment or initiative towards science can be.)

***********

I’ve always thought that there is a serious disconnect between environmental education in schools and the environment itself. The end result is that students are left with little idea of how interconnected things are, how easy it is to cause major damage to the environment, and how all our little actions may appear unrelated to environmental damage but actually do cause immense harm. This starts with simple things like leaving the lights or air-conditioning on when we leave home. You think it’s just a little hit on the electricity bill, but it goes all the way to burning more fossil fuels to get that energy, which comes from mining coal or petroleum, with the associated direct environmental damage and pollution.

Anyway, while scanning through PLoS biology I happened to read this commentary, which more clearly and coherently puts together an argument on the failure of environmental education, and how we can fix it. While the entire article is well worth a read (it is written avoiding scientific jargon), I’ll leave you with the conclusion and an edited “seven ways to improve environmental education”. I found myself nodding to many of them.

1. Design environmental education programs that can be properly evaluated, for example, with before-after, treatment-control designs. Such approaches represent a sea change from programs today, and we expect considerable resistance from environmental educators. But the environmental community at large must stop rejecting criticism as negative and must embark on a policy of continuing self-evaluation and assessment…
2. Many environmental issues facing us today are caused by over-consumption—primarily by developed countries. Changing consumption patterns is not generally a targeted outcome of environmental education, but we believe it is one of the most important lessons that must be taught… As countries develop, their environmental footprint may expand, and consumption control may become more important. …we need to radically overhaul curricula to teach the conservation of consumable products. Teaching where and how resources come from—that food, clean water, and energy do not originate from supermarkets, taps, and power points—may be an important first step.
3. We need to teach that nature is filled with nonlinear relationships, which are characterized by “tipping points” (called “phase shifts”): there may be little change in something of interest across a range of values, but above a particular threshold in a causal factor, change is rapid. For instance, ecology, which focuses on understanding the distribution and abundance of life on Earth, is a complex, nonlinear science. If environmental education is linear—in other words, if you teach that recycling one beer bottle will save “x” gallons of water—people will not have the foundation to think about linkages or nonlinear relationships. … For instance, when European sailors first came to the Caribbean, sea turtles were extremely common. After intensive exploitation, turtle populations and the vital ecological roles they play have never fully recovered. Without a historical component, these baselines will shift as we ratchet our way to inevitable ecological collapse [19].
4. We need to teach a world view. Americans know little of world history and are geographically illiterate. A 2002 poll of 18–24 year olds in nine western countries ranked the US next to last in geographic literacy [20]. A greater appreciation of the diversity of cultures and peoples in the world should help us realize the selfish consequences of our consumption. “Not in my backyard” is not a sustainable rallying cry in an interconnected world when we are faced with global climate change. We are too late for “think globally and act locally” to work. And, contrary to the statements of President George W. Bush, the American way of life must become negotiable if it is to be sustainable. We have little trouble suffering security-related inconveniences; we should be willing to accept some inconveniences for the opportunity to live in a sustainable environment.
5. We must teach how governments work and how to effect change within a given socio-political structure. We suspect that many individuals will be offended by the thought that large industries have so much sway on the wording of state and federal legislation. We all suffer from polluted water and greenhouse gasses, but lobbyists are very effective in diluting potentially costly legislation meant to safeguard our water supplies or prevent rampant climate change. Understanding how the system works will empower subsequent generations to change it.
6. We must teach that conservation-minded legislation may deprive us of some of the goods and services that we previously enjoyed. Inexpensive airline flights make flying routine, but planes create more greenhouse gases than trains or buses [21]. Self-sacrifice will be necessary to some degree if we are to avoid or minimize adverse effects of imminent environmental threats with truly global consequences.
7. Finally, we must teach critical thinking. Environmentally aware citizens must be able to evaluate complex information and make decisions about things that we can't currently envision. True scientific literacy means that people have a conceptual tool kit that can be applied to a variety of questions. Unfortunately, much science education is not inspired, and students are required to learn facts without being given the ability to manipulate and analyze those facts. Without the ability to ask questions, identify assumptions, and make well-reasoned decisions, we're left with a population ripe for exploitation by less-than-honest industries and politicians.



Read the complete article here.

Wednesday, May 23, 2007

Details in structural biology

(Apologies to my general non-scientist readers in advance for a very specific science mini-post. And this post was written about an hour after a terribly boring NMR talk, so might be biased)

Structural biology is a major, interdisciplinary branch of biology that worries about what biological macromolecules (read: proteins, DNA and RNA) look like (or at a broader scale, what organelles look like).

Anyway, when it comes to proteins and nucleic acids (RNA and DNA), the two major methods used to study them are X-ray crystallography and Nuclear Magnetic Resonance (NMR). Both methods have their advantages and disadvantages, have revolutionized biology (starting from the structure of the DNA double helix, and the structure of hemoglobin), and there are plenty of debates on those.

But for the general biologist, those differences are meaningless. Show them a structure, and molecular insights into protein function, and they're quite happy. The details of the experiments, and all the difficulties of solving the structure are incidental, and not particularly interesting. And I think this is where the X-ray crystallographers have really understood what it's all about.

In general (and there certainly are exceptions), when speaking to a broad audience (like say a biochemistry department somewhere), most X-ray crystallographers skip through the actual experimental nitty-gritty, like the particular problems in obtaining phases, or the details of the Ramachandran plot, and go right to the structure itself, and the implications that follow it. So, at the end of the talk, the audience (which would perhaps have a small percentage of crystallographers, and a majority of diverse biologists) will go back happy, feeling like something has been learnt.

The NMR folks though don't seem to have really received this message. In all their talks, they cannot resist going into details about the NOE spectra, or unique angle restraints, or exchange of somethingortheother. More often than not, at least I don't leave the seminar having appreciated the bigger picture.

Getting your message across to a broader audience is a big aspect of science. Could this be just one reason why there are a lot more X-ray crystallographers out there than NMR spectroscopists? And does anyone else feel this is true?

Thursday, May 17, 2007

Why is a PhD this long and hard?

In most of the basic science departments in the United States (and in many other countries), a PhD takes between 5 and 7 years. That seems awfully long, doesn’t it? My five-year PhD was actually shorter than the average time taken to finish a PhD in my department. What’s interesting is that I’ve heard many PIs say that it takes too long, and that “it used to take less time when we did our PhDs. I don’t know what’s changed”, when in fact they are part of the “problem”. So, here are my thoughts on why it takes this long.

I don’t think it is simple. I certainly don’t think that the quality of students has decreased that much (at least in the premier research universities). The students I know have all been motivated, and have come in with some research experience (so weren’t complete newbies in the lab). They all work hard (most students work 6 days a week, 10 hours a day, juggling experiments with courses, journal clubs, assignments and whatever else). There may have been some wonderful students 20 or 30 years ago, but that probably doesn’t explain why PhDs have expanded from 3-4 years to the present 6-7 (that’s a doubling).

I would put down four broad (and overlapping) reasons why it now takes so much longer to finish a thesis. The first is the structure or system itself, and the “requirements” for a PhD. Basically, there aren’t any clear expectations or requirements for what constitutes a PhD. When students join a program, they aren’t quite clear about what is required to have a thesis. Most good departments have some kind of unwritten “publication” rule, and students are expected to have at least a couple of first-author publications in good journals (or one “stellar” paper). But that is a fuzzy rule, and much depends on how the projects unfold, and on what their bosses themselves think is needed. I’ve known students who have graduated without a single first-author paper, and others who are still in grad school (even though they seem ready to graduate) after publishing some high-profile papers. It’s a crapshoot. Part of it is because the expectations for the amount of data that goes into a paper have gone up, and substantially more work is required to make a complete paper. While some experiments are certainly easier due to easily available reagents, I don’t think the availability of improved reagents and tools is proportional to the amount of work that goes into a publication. A secondary factor might also be the massive increase in the number of scientists, which has raised the expectations of postdocs, and this trickles down to students as well. The result is that a PhD is no longer only about coming up with a good hypothesis, and systematically thinking through and testing it, while acquiring a fairly thorough knowledge of the field.

The second is the more or less mandatory “rotations” that the student does before selecting a lab. The idea behind rotations is to give a student the opportunity to briefly work in 3 different labs for short periods of time, so that the student can figure out if s/he would be a good fit in the lab, if the PI wants the student, and if the research is exciting enough. However, most schools have approximately 3 month rotations (basically a full year of rotations). If the goal is only to give a student an opportunity to get a feel for a lab (without too many expectations of producing data), then 4-5 week rotations should be sufficient. Still, this is a more minor point, since I do think the rotations help students make better choices. However, there should be options for students with very clear research goals or ideas to avoid 3 mandatory rotations, particularly if they are sure they like the lab they have first rotated in, and feel it matches their goals.

A third (minor) reason PhDs drag on now is the non-research requirements that a student “has” to undertake. There is a substantial amount of mandatory coursework in the first two years, along with a few qualifier exams and whatnot thrown in. So, if you add that to the time spent in rotations etc, a student has barely done any serious experiments for about 2 years (discounting say a couple of months in summer). I actually liked courses, and took quite a few throughout my PhD, but I was able to multi-task, and get quite a bit of research done in the lab. But I do know that many students cannot focus on their research and simultaneously go through the grind of coursework and qualifiers. If I had to quantitate what this process does to the duration of a PhD, I’d say this adds another 6 months of time to the whole process (which isn’t that bad really, and can be useful if you get something out of the courses).

Which brings us to the final, and by far the most important factor that influences a PhD. The advisor. I have absolutely no idea what PhD advisors were like 25 years ago. But I do know that while there are still many good mentors out there (I had a great one), there are plenty of terrible ones. And a terrible mentor does not mean a terrible scientist (often it is quite the contrary). There certainly has been a huge explosion in the number of labs and PIs out there especially since the early nineties. Before that, there were perhaps a tenth as many scientists in research (particularly in the biological sciences). More importantly though, while I don’t know what the expectations from mentors were in the old days, I do know that there are NO real expectations from a PI as far as their graduate students go. Sure, there are department requirements and some committee meetings and suchlike. But those can be negotiated without much difficulty. There is no real incentive for investigators to actively mentor their students well, and importantly, there is no demerit if they are terrible. If an assistant professor is up for tenure, departments do look at her/his record with graduate students, and if they have managed to get out a couple of PhD students in their 5 years before tenure it is good for them. But, in most places, that is at best a secondary consideration for tenure. What matters is if the PI has managed to get a few grants, and a few high profile publications. After tenure, and particularly if the investigator is famous, there is no system in universities which really takes a look at how the students in that investigator’s lab fare. Neglecting a student isn’t noticed or penalized. As long as the investigator hasn’t done something seriously bad to the student, it’s all ok. 
This means a PI doesn’t really need to monitor a student’s progress, or sit down and think hard whether the student has a viable project or not, and can also tempt an investigator to make the student continue on a crazy (or dead) project far longer than they should. This is particularly true if the investigator doesn’t need to pay the student (who may be on some training grant or fellowship), or if the PI is so flush with funds that it doesn’t really matter. Finally, since there is no incentive for the investigators to get students out, they often appear to keep the students (particularly the productive ones) longer than they really need to stay, to get more out of them. That work might make the thesis thicker, but was it really needed for the thesis itself?

While this might sound like a critique of the graduate school system in the US, it is not. I have no hesitation in saying that the quality of PhD education in the US is far more substantial and comprehensive than anything else I’ve seen, particularly in the breadth of knowledge that is (or can be) acquired. I wouldn’t have come here if it weren’t. However, while good, it certainly does drag on longer than it needs to (particularly for good, motivated students). Given that postdocs now take much longer anyway, the duration of a PhD could at least be kept to around 4 years. But that cannot happen if the key problem, that of the investigators themselves, isn’t tackled.

Many of the readers of this blog are (or were) grad students. What do y’all think?

Sunday, May 13, 2007

A (forgotten) history of skepticism

There’s no shortage of philosophy or religious thought that has come out of India. And, thanks to an extensive exoticization and mystification of India, religion and philosophy almost define it. Perhaps this is not surprising, since four major world religions (Hinduism, Buddhism, Sikhism and Jainism) originated in India, and even today religion is visibly everywhere there. However, there has also been a strong tradition of heterodox thought in India. Amartya Sen, in his extremely engaging book The Argumentative Indian, lucidly describes a tradition of heterodox beliefs and debate in Indian thought, including in religious philosophy. His goal in the book is to outline the long history of vigorous debate and argument in India, with its rich tradition of heterodox beliefs. In those writings, Sen briefly describes (without delving into) a rich Indian tradition of skeptical, agnostic and atheistic belief within the various Indian religious traditions.

However, India today is arguably religious in a more traditional sense, which includes a narrower definition of being theist, and believing in a god or a creator across the various religious denominations. While different religious beliefs are thriving in India, the strong traditions of skeptical, agnostic or atheistic belief that once co-existed in India are barely visible. In this post, I wanted to describe the strong tradition of skepticism that existed in Indian religious thought, the demise of such thought, and the role it may have played in early Indian thought and in the development of science.

Hinduism has had an extremely wide range of beliefs (unsurprising, since it is not a “codified” religion defined by a single book or belief), ranging from polytheism to monotheism, to prominent elements of monism (particularly in Vedantic thought), to outright skepticism, agnosticism and committed atheism. For example, Sen points to the Rig Veda, which goes back to the second millennium BCE and allows doubt even about a creator and creation. The Nasadiya sukta ends by asking whether creation arose, or formed itself, or perhaps it did not, and only the one in the highest heaven knows, or perhaps he does not know. The Indian epics (the Mahabharata and the Ramayana) have doubters and skeptics who constantly raise their questions (though, in these cases, they are eventually overruled). The most prominent atheist voices in Hinduism were the followers of the Lokayata and Carvaka schools of thought, who denied the existence of any god, and said there was nothing after death, soundly rejecting an afterlife. You are born, you live, and you die; that’s it. This school of thought held up direct perception, not speculative reasoning, as the method of establishing truth.

Buddhism, the second major religion of India, is doggedly agnostic. The questions of creation or a creator are left firmly in the realm of the unknown, among the fourteen unanswerable questions. The Buddha remained silent on any questions of god or creation. It was not important to him, or to anyone who had been “liberated”, to shackle themselves with those questions, as such questions would only lead to dogma. It was important (especially in the early Buddhist tradition) to question and reject orthodox dogma, and debate and questioning played (and still play) central roles in Buddhism. Buddhism also played a very important role in India (and in China and the East, where Buddhism spread) in spreading education, as Buddhist texts were widely translated and printed, and made available to all who wanted to read them (without restrictions of specific classes). Widespread education is an important starting point for the emergence of diverse ideas including heterodox thought, and skepticism is but one aspect of heterodox thought.

Jainism is perhaps unique amongst Indian religions in that it is clearly and strongly atheistic. It denies the existence of any creator or god, and states that the universe is timeless and functions according to natural laws. Alexander the Great is believed to have met and been suitably impressed by the thoughts of the Indian gymnosophists (Samana/Sramana or Jain ascetics of the digambara (“sky clad”) sects), though of course, he probably went back to his godly ways and built temples to Zeus or suchlike after being impressed.

Collectively, there was plenty of space within Indian traditions for agnostics, skeptics and atheists. While such space is not necessarily causal, the prevalence of such thought encourages heterodox thoughts and ideas. The period when such thought was most prevalent in India coincides strongly with the time of India’s most substantial output in science, astronomy and mathematics. For example, during the so-called Greek and Arab periods of mathematics (from the 6th century BCE to the 14th century) there were substantial contributions by Indian mathematicians and astronomers. This was approximately the time when heterodox beliefs were prominent in India: there were strong Buddhist (and Jain) traditions until at least the 11th century, and followers of Carvaka thought are known to have existed until at least the 16th century (when some are known to have attended the Mughal emperor Akbar’s interfaith gatherings). The importance of skeptical, unorthodox thought in the progression of science and discovery cannot be overstated. For example, the 5th-century astronomer Aryabhata made numerous discoveries in mathematics and algebra (which were not against any dogma), but his theories in astronomy were revolutionary. He suggested that the earth moved while the heavens were still (a view sometimes read as anticipating heliocentric models), which was quite radical, contradicting the orthodox belief that the sun god in his chariot went around the earth every day. But, given the prevalence of non-orthodox belief, Aryabhata was not thrown into some dungeon; he remained a celebrated court philosopher, as did his disciples and later followers of his thought, like the celebrated Bhaskara and Varahamihira, who continued to use and refine Aryabhata’s methods to estimate eclipses, and to propose theories contradicting orthodox religious thought.

Buddhism’s pragmatic, “middle-road” approach, as well as a visible skeptic school of thought within Hinduism, appear to have played important roles in the acceptance of such thought. Jainism preached a path of extreme austerity and renunciation, which would not have encouraged more material quests for knowledge of the world, but its strong tradition of atheism undoubtedly played an important role in combating belief in the supernatural and superstition. However, by around the 14th century, most atheistic thought in Hinduism (and the followers of the nastika schools of thought) had died out. In fact, there are few records of the Carvaka and Lokayata schools of thought, and most of the existing records come from other (still prevalent and better preserved) orthodox Hindu philosophies (including commentaries of various Vedantic scholars such as Madhva), all of which give biased or incomplete accounts of the atheistic faiths, as their goal is to prove those faiths wrong. Atheist schools of Hinduism are markedly absent in modern Hinduism, and Hinduism itself is being viewed through narrower, less diverse prisms. Buddhism is no longer a major religion in India, and in any case, current forms of Buddhism are (for all practical purposes) apparently more theist (with worship of the Buddha having a very central role), with less room for questioning faith. Similarly, Jainism, while still very alive, is a more minor faith, and superficially rather indistinguishable from Hinduism (with ritualized worship of the Jain tirthankaras, and significant ritualized dogma).

So, numerous questions arise. Why did this tradition of skepticism die out? How can a visible space for skeptics, agnostics or atheists be recreated in Indian society, particularly in the public sphere (and especially outside of the communists, who in India at least are ironically entwined with religious groups for electoral reasons)? Why isn’t this rich history of skeptical thought taught as a part of social studies in schools in India?

Food for thought, that.

Thursday, May 10, 2007

A couple of announcements

I know that the frequency of my posts has decreased a bit, especially over the past couple of months. This is largely an effect of working longer hours in the evenings, followed by some intense basketball games, which leaves little energy to blog at night. However, I do try to maintain a frequency of a couple of posts a week, with at least one being more substantive. Anyway, this might continue for the next couple of months, where I will have one comprehensive post a week, with a couple of shorter ones thrown in for fun. So, I encourage you all to sign up for my feeds, which should tell you every time there is a new post. Or you could use the email subscription button on the right sidebar, and subscribe to posts from balancing life. The earlier glitches have been fixed, and it works quite well now, delivering the latest post to your email id. That way, you can always keep up with the latest on this blog, with little pain. Do come back to comment though, since the comments are usually great fun.

************

Now for a bit of news. Arunn at nonoscience has revived the blog carnival Panta rei, where all science flows. In its new avatar, Panta rei will not be just on fluid mechanics and thermodynamics (as it used to be), but will now have different editions for different topics, and will appear every other week. The first edition of the new Panta rei will be on May 14th, and will focus on chemical sciences. I think it is a great addition to the many great science carnivals (linked on the right sidebar), and I particularly hope to see more Indian science bloggers.

Go forth and participate.

Monday, May 07, 2007

What’s the purpose of studying this?

I’m often asked by (my non-scientist) friends or family what exactly I study. My answers vary in sophistication, from something outrageous (“I’m going to cure cancer in a few months”) to something more specific. Sometimes, some friends are more curious, and want to know gory details. So, I provide them.

Sometimes they understand and nod their heads acceptingly as I talk about basic research and looking for new mechanisms in basic biology. But invariably the question pops up “but what’s the use of studying this? Is it for making some drug or something? Otherwise, why study it?”.

My usual response would vary from describing the philosophies of science and the quests for new discoveries, to warm and fuzzy statements on the importance of furthering knowledge which may not be applicable for anything right now, but may be useful years down the line. Some fields of science are easy for people to wrap their minds around, and decide it is important. In others, it is not as obvious.

These statements are usually met with blank stares, skepticism, or, more often, a pitying nod that would translate to “you poor fool, why don’t you do something useful”. My friends who work in software or hardware technologies are particularly harsh, since they are used to pressing deadlines and bringing out and shipping products in finite time spans of months at most. They persist with questions like “so, will the stuff you’re doing be useful in 5 years time? Ten years? If you don’t know, why don’t you work on something more pressing, important and useful, like actually trying to cure cancer?”

Ah, well, to “cure” a disease, you first need to know enough about it to be able to do something about it. And finding out enough (where “enough” is very relative) means you need to poke around asking different questions, which will lead you down diverse paths. Most of them will not give you what you are looking for, but the knowledge you gain will open up new fields, and those may result in important discoveries that benefit mankind. What’s more, some of the biggest breakthroughs in basic concepts in biology, ones that have gone on to have a huge impact on human medicine, have come not from studying human problems, but from studying yeast, or flies, or frogs, or worms, or mice.

The joy is in the quest for knowledge. As new knowledge is gained, applications for that knowledge will evolve on their own.

Saturday, May 05, 2007

If only it were so easy

I was chatting with my dad, who is among the world's most optimistic people. Somewhere along the conversation, he says (mostly in humorous encouragement) "so, somehow you've managed to get your Ph.D., now the next thing you should get is a nobel prize."

(That sounded much better and a whole lot funnier in the original Tamil).

If only life were that simple.

*************

Speaking of simple, it's amazing how PIs (for those of you who are not in science, that means the professors for whom graduate students and lowly postdocs like me work) effortlessly come up with these incredibly complex and elaborate experiments. Most of them have been away from the bench for so long that they no longer realize how much time and effort some experiments take. In their own time-warp, experiments are childishly simple.

So, it is simplicity itself for them to suggest, say, the purification of a non-recombinant protein from cells, using a "simple" chromatographic separation (involving different types of columns), following the protein with western blots alone, collecting hundreds of fractions, and finally topping it all off with a few hundred activity assays, all of which can be done quite easily (in their minds) in about three days, give or take a few hours. (Something like that would take well over a week to do, and to get it all right could take up to a month.)

If only life were that simple.

Tuesday, May 01, 2007

Stimulating the mind and the bowel

The long, tedious and (hopefully) sometimes humorous previous post reminded me of something I saw on the History Channel. Now, the average quality of programs on the History Channel ranges from shoddy, bad, pandering and condescending to outright terrible. However, every now and then, they stun me by showing something that’s well researched, informative and full of interesting trivia.

Today we take toilet paper (yes, tp) for granted, and watch smiling animated bears selling us stuff graded by softness and absorptive capacity. But, think about it: there clearly wasn’t any tp not that long ago, before mass-manufacturing and suchlike. A lot of people in various parts of the world use good old water to wash their backsides, but what did the rest of the world do? Well, the cleaning material of choice was leaves (the softer the better) or clay pellets for many hundreds of years. Rumor (and the History Channel) has it that the Chinese, after inventing paper millennia ago, started using paper for the all-important job somewhere in the 12th century.

But the best seems to have been reserved for the early 1900s, particularly in the United States. Restrooms would be equipped with a choice selection of the latest as well as classic old magazines. The user would then spend a relaxing few minutes relieving oneself, while stimulating the mind with the latest and best in 1910s fashion, horse-carriages, the Model T Ford, geography, cattle-farming or whatever else. Once the job had been successfully negotiated, all that person had to do was to tear off the appropriate pages, complete the required ablutions, leave the remaining pages for the next user of the restroom, and leave.

The ubiquitous and inexpensive availability of TP by the 1940s eliminated the need to leave behind a good supply of catalogs, magazines and newspapers in restrooms, and the rest, as they say, is history.

So, here’s a terrific market opportunity for the eager entrepreneur who can put Charmin out of business. Partner with a cartoonist (Jim Davis or Gary Larson would be my recommendations). Print out a series of cartoons on the TP. Heck, you could even print out an entire graphic novel, page by page, on each sheet of TP (hear that, Frank Miller?).

Sell them, make millions, and make millions of people smile every time they use a restroom.

Take that, smiling Charmin bears.