Monday, September 05, 2016

FlowPy: from code to software

Cheers!! FlowPy, the software that we developed, got three citations in published papers (PMID: 26116575, PMID: 27253695 and PMID: 27084942).

It started as a lazy B. Tech project. My Ph.D. student was using flow cytometry. We had good data and wanted to run some advanced statistical analyses on it. Unfortunately, those tools are available only in commercial flow cytometry software. And to my utter surprise, our flow cytometer did not allow us to export the raw data as plain text. So we were trapped: we could not use any advanced statistical tools on our data.

So Tejas was entrusted to develop a software tool that would extract data from flow cytometry files and export it as plain text for further analysis. It was his B. Tech project. He wrote it in Python. We named it FlowPy.
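FlowPy's own source is not shown in this post, but as a rough illustration of the first step such a tool performs, here is a minimal sketch that parses the fixed-width header of an FCS file to locate its TEXT and DATA segments. It is based on the published FCS file standard, not on FlowPy's actual code; the function name and the synthetic header are my own.

```python
# Sketch: parse the 58-byte ASCII header of an FCS file to find where
# the TEXT and DATA segments sit. Based on the FCS standard, not on
# FlowPy's actual implementation.

def read_fcs_header(raw: bytes) -> dict:
    """Return the version string and (start, end) byte offsets of the
    TEXT, DATA and ANALYSIS segments from an FCS header."""
    version = raw[0:6].decode("ascii")  # e.g. "FCS3.0"
    # Six right-justified 8-character offset fields start at byte 10.
    offsets = [int(raw[10 + 8 * k: 18 + 8 * k]) for k in range(6)]
    return {
        "version": version,
        "text": (offsets[0], offsets[1]),
        "data": (offsets[2], offsets[3]),
        "analysis": (offsets[4], offsets[5]),
    }

# Synthetic header for illustration: TEXT at bytes 58-1023, DATA at 1024-2047.
header = b"FCS3.0    " + b"".join(b"%8d" % n for n in (58, 1023, 1024, 2047, 0, 0))
print(read_fcs_header(header)["data"])
```

Once the TEXT segment offsets are known, the key-value metadata and the raw event matrix can be read and written out as plain text (e.g. CSV).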

At that time, I had no idea about software development. Software development is quite different from writing some code for personal academic use. I had no idea about web hosting of software, GitHub, version control, or GNU licences. We had no funding for the work either.

We were just happy to write the code, use it, and share it with others. So I posted the stuff at WikiDot, a blogging site. It is still there:

Over the years, other B. Tech students joined the team. Two students, M. V. S. R. Sastry and Revanth Sai Kumar, made the most contributions. From a mere Python script, FlowPy graduated to GUI-based software. It is no longer a mere data extraction tool; it can also perform various statistical analyses and visualizations.

We never published any paper on FlowPy. We just let it sleep at its WikiDot site. The information seeped slowly through the Web. Several Web lists of flow cytometry software listed FlowPy. Occasionally, I got queries about bugs in our code.

Slowly, we were losing interest in FlowPy. For the last year, we had stopped further development. But then came the bravo moment!! I realized that some of our colleagues across the globe were using it successfully and that two papers had cited it. Unfortunately, WikiDot does not provide file download data. But Google Analytics tells me that we do get regular traffic, and that too from all over the globe.

Publishing software is quite different from publishing a paper. Software is not a mere collection of information. It is a product: a product that has to be stable, reliable, and work as promised. It has to work every time a user uses it. And we don't know the users.

That's why we are thrilled to see that some of our fellow scientists are finding FlowPy useful for their work. FlowPy is still buggy. It still has ample room for improvement.

We have restarted development of FlowPy. We hope to remove the bugs and increase the number of built-in tools. If you use flow cytometry, give FlowPy a try and let us know your experience and expectations.

Happy Flow!

(Updated on 30/9/2016)

Sites that list FlowPy:


Friday, August 26, 2016

Reject, minor, major revision and the fourth option

Once, a reviewer of one of our papers wrote that she/he did not understand our mathematical model. That's an honest admission. No one expects that everyone will understand everything. But did that not affect the decision made on our paper? Maybe. Maybe not.

But every reviewer faces this problem. As science gets more and more interdisciplinary, one often finds some part of a paper a bit difficult to understand and review. I am not talking of complete ignorance, nor of a badly written paper. I am talking of a situation where you broadly understand the concepts and issues but lack clarity on the particulars of that paper. The best option then is to ask the authors to explain, and to help you understand their paper better.

Only once you have understood the paper, with clarity, can you make a rational judgment on it. Isn't that obvious? But not in practice. Journals do not allow you to post queries or make comments on a manuscript without making a judgment from three choices: minor revision, major revision, or reject.

There is no scope for a dialogue, albeit an anonymous one, between the people who did the science and those who did the vetting. Yes, there exists the practice of a post-review rebuttal. But that comes only after the reviewer has made the decision.

The purpose of publishing scientific papers has changed with time. So have the practice and culture of peer review. Journal editors complain of a shortage of serious reviewers, authors complain of lackluster reviews, and reviewers complain of a lack of professional incentive to review papers.

Even then, there are people who review each other's papers and do so in all earnest. They still believe in the elementary purpose of the peer review of a scientific paper: to improve the manuscript and the work reported in it.

Wouldn't it be wiser to help these scientists do the job better? One step towards better review would be to provide a fourth option to the reviewer. Let reviewers post questions or start a thread of discussion with the authors before they decide on the manuscript.

Obviously, such interactions would be considered part of the review documents and would have to be bounded by a specific duration. They would also be bound by all the legal and ethical guidelines of peer review.

I am not sure how many of my peers would use this option. But letting some use it judiciously won't harm science; it will make it better.

Saturday, May 14, 2016

Everybody loves an anti-cancer drug

There are over 19,000 papers, published to date, with the word anti-cancer in the title or abstract. Over the years, funding for research on newer anti-cancer drugs has increased. So have the publications with this phrase (see the figure below). This phrase also has some magical power: it helps me easily justify my research and grab a slice of the funding pie. Unfortunately, the pie is never enough for all.

Unfortunately, we are still far from defeating the disease.

Plot showing the trend in publication of papers on "anti-cancer". PubMed was searched for all papers having the phrase "anti-cancer" either in the abstract or in the title. The numbers in parentheses show the year of first report.

Working on drugs against cancer has some technical advantages over other diseases. Think about developing a new drug for an infectious disease like dengue or malaria. It’s difficult to have a good in vitro model for many infectious diseases. And when you have one, you need special laboratory facilities and legal clearances to work with it.

Cancer research has no such troubles, at least at the early stages of a project. Most of the in vitro assays are performed on cell lines. HeLa was the first human cell line, reported in 1952. Since then, cell lines have been the workhorses of anti-cancer drug development. These cells are treated with a drug, and the drug’s ability to kill them is measured. Sometimes the drug does not kill the cells but just stops cell division. That’s good enough for us. Measuring such cytotoxic or cytostatic effects of a drug is not so difficult. We have many cheap and reliable assays for this. One such is the famous MTT assay.

These experiments are simple and cheap, and you don’t face many legal or ethical issues. For a scientist, these are critical determinants. Social priority, science policy, academic fashion, and the ease of preliminary experiments are all behind the exponential growth in publications on anti-cancer agents.

But what is an anti-cancer agent? A search through PubMed throws up a curious mix of items, from plant extracts and nanomaterials to atmospheric gas plasma. Most of these studies involve some form of in vitro cell culture-based experiment to show that these materials kill cells, preferably through apoptosis. Essentially, the authors are checking the cytotoxicity of these agents. Interestingly, some of these materials are also toxic to bacteria and are often promoted as bactericidal agents, albeit in separate papers.

Most of these anti-cancer agents never make it to the next step of evaluation. Nobody chases them further, not even the inventors. The authors move on to another project, on another anti-cancer drug. Another paper is minted with the same keyword.

Cancer is a cellular disease. It is a disease of cells carrying genetic, epigenetic, and phenotypic changes. To treat it, either we have to convert these cells back to normal, or we have to get rid of them. For the time being, the first seems improbable, and our focus is on the other option.

In 1947, Sidney Farber used the same principle when he used aminopterin to treat children with leukemia. Aminopterin stops cancer by blocking cell division. Chemotherapeutic agents developed subsequently have the same property. They block the proliferation of human cells through diverse mechanisms. Blockage of proliferation hits cancer cells, and any other rapidly dividing normal cells. So, these drugs have some sort of inbuilt specificity: they block cell division and affect dividing cells more than those sitting idle.

However, many anti-cancer agents reported in the academic literature do not follow the same logic. Something can be cytotoxic for different reasons. It may kill cells by blocking essential processes like protein production. Cells can be killed by forming pores in the membrane or by oxidative damage. Many so-called anti-cancer agents kill cells by these mechanisms. These mechanisms have no specificity towards cancer cells and would affect every other cell in the body. Even then, the authors call them anti-cancer agents.

In fact, we really don’t have a shortage of such non-specific cytotoxic or cytostatic agents. I would say we have enough of such arsenals; enough to stop further searches for new ones. The focus should be more on developing strategies to deliver them specifically to cancer cells, sparing the normal ones.

Academic research has its own dynamics. Some work on basic “blue sky” questions about how nature works. Others prefer to work on issues that have immediate social relevance. The discovery of a new cancer drug would have immediate social impact. Many of us may have such high goals, but we are mostly lost in closed alleys.

Drug development is always an uncertain endeavor. Something that worked well in vitro may fail miserably in animal experiments or in clinical trials. Even then, our efforts should start with clear logic. Our strategy should have a clear rationale based on our existing knowledge of other anti-cancer drugs. Unfortunately, "logic" is losing to the rush to get published. It is losing to fashion in academics.

As the rogue cells keep dividing within millions of people, we keep trying new methods to checkmate them. We keep trying, often even without rationality. And the printing press churns out "anti-cancer" in black and white.

Monday, February 15, 2016

The Hidden Data: let's share & recycle it

Riding on the success of Open Access, the demand for Open Data is gaining momentum. Scientists do experiments, collect data, analyse those data, and draw conclusions. However, a scientific publication reflects all these processes only in brief. The methods of experiments and data collection get the least space in a paper. Often, this leaves most of the crucial steps of an experiment to the imagination of readers. Raw data metamorphose into graphs and images. Statistical analyses prove their existence only as some stars, somewhere on the graphs.

The main focus of a scientific article is storytelling. Just like a film director, a scientist directs the readers through the paper in a chosen way. You learn the story that the author wants to tell you.

No, I am not questioning the integrity of authors. They must have valid and honest stories to share. But that does not exclude the possibility of multiple other stories hidden in the data. Those can be unearthed only when you allow everyone to look into it, when you allow everyone to think over your observations. After all, science is communal.

Scientific data, particularly in biology, come in different forms. They can be images, videos, numerical values recorded in spreadsheets, sounds collected from the field, and even living organisms. With that diversity comes the volume of the data. Obviously, you cannot share such a multitude of information using the print media.

Thanks to digital media, we can now store and share different types of data easily (the only exception being living data). With the enormous developments in data storage and cloud computing, we have no excuse to hide our data in lab notebooks and desktops.

As the demand for openness has increased, journals like PLoS, Science, and Nature are now promoting data sharing to different extents. Some funding agencies have also mandated such data sharing. However, there is no consensus across the board, and many are raising apprehensions about the misappropriation of raw data.

However, the debate mostly revolves around the disclosure of data from large-scale studies, like clinical trials. But the small-scale experiments performed every day by most labs have the same fate. The observations are cherry-picked, arranged, and then packaged in suitable graphical forms to present to peers.

Say you want to show that a drug inhibits insulin signaling. For this, you need to identify the correct doses of insulin and the drug, and the required treatment time. Therefore, experiments are performed to identify the optimal doses and time. Eventually, you will perform an experiment at those optimum conditions. The observations of this experiment will be presented in graphical form to substantiate your claim. For your story, the elaborate dose- and time-dependent observations are not crucial, and they are lost somewhere in your lab records.

But that hidden data may be crucial for someone working on the kinetics of insulin signaling. Although you have already done the experiment, they will have to do it again.

Even when such data are published, they are mostly in graphical form. Graphs are good for communicating ideas and conclusions. But they are not suitable for data reuse. I cannot get the exact numerical values of the measured variables from a graph. Often, raw data are transformed before plotting (remember the % cell viability of an MTT experiment). Without adequate information, it is impossible to get back to the original values. The data exist, wide and open, but we cannot reuse them.

This is a frequent problem faced by people in mathematical biology. Experimental observations are abundant in biology, but most of the published data are not suitable for use in mathematical modeling. There are several free tools, like DataThief, Graph Data Extractor, and Web Plot Digitizer, that can extract numerical data from graphs. These are very easy to use. But the quality of the extracted data depends on the quality, resolution, and size of the graph images. Even at their best, these extraction tools can provide only approximate values.
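Under the hood, these digitizers all do essentially the same thing: you mark two known points on each axis, and every other pixel is mapped to a data value by interpolation. A minimal sketch of that calibration step, assuming linear axes (the function name is mine, not from any of these tools):

```python
# Sketch of axis calibration as done by graph-digitizing tools:
# two known (pixel, value) pairs define a linear pixel-to-data mapping.

def calibrate_axis(p1: float, v1: float, p2: float, v2: float):
    """Return a function mapping a pixel coordinate to a data value,
    given two calibration points on a linear axis."""
    scale = (v2 - v1) / (p2 - p1)
    return lambda p: v1 + (p - p1) * scale

# Example: the x axis runs from pixel 100 (x = 0) to pixel 500 (x = 10).
to_x = calibrate_axis(100, 0.0, 500, 10.0)
print(to_x(300))  # a point midway across the axis maps to x = 5.0
```

The approximation error comes from locating points on a raster image: in this example a one-pixel mistake corresponds to 0.025 data units, and the error grows as the image resolution drops.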

Data extractors are useful, but they are not a solution to the problem of hidden and lost data. The best solution is to store all of our observations in freely accessible repositories.

There exist many databases for the storage and sharing of structured data, like sequence information, protein crystallographic data, and microarray data. However, most of us do not use these services religiously.

The trouble is greater for unstructured data, say all of our western blot images or the cytotoxicity data of drugs that we are testing on mammalian cells. Nobody shares the raw data of those experiments. Thankfully, several web services, like Figshare, have started to store unstructured data too. The best part is that they provide a DOI for everything stored there. Therefore, the data are identifiable and citable, so that you get due credit for your data.

As individual scientists, we may start with baby steps. Once a paper is published, one can share the data behind the published results, and the unpublished background results, through such cloud services. It would certainly require time and effort to clean and structure the data before sharing. As a community, we have to encourage and appreciate such efforts.

However, I wonder how this model of cloud storage of data can be sustained without financial support from funding agencies and academic institutions. Experimental biologists are churning out enormous amounts of data at every moment. Storing those for eternity, in a publicly accessible repository, requires enormous financial support.

When I get a grant, it pays for my reagents and instruments. It also pays for lab stationery, like the lab notebooks where I record my observations. Such grants should also cover the cost of storing those observations for the future.

Hence the requirement for public digital repositories for data. I am sure that, as the campaign for Open Data spreads, major funding agencies across the globe will chip in for such public repositories. Beyond science, it makes economic sense too.

In India, the idea and ideals of Open Access are slowly seeping in. The DST and DBT have created repositories for papers published through their funding. Many institutions, like the IITs, have created publicly available digital repositories for theses and similar documents. Recently, work on a National Digital Library has started, to integrate all such repositories. I hope that the scientific community and policy makers will soon realise the importance of data repositories.

Till then, let us share our data, code, and software through whatever means we have. Let us reuse and recycle every bit of information.

Sunday, February 07, 2016

Refresh MathBio 101 with Zika

By now, you must have been introduced to Zika. You may also have heard the heart-wrenching stories of children born with small heads. They call it microcephaly. It is probably connected to Zika virus infection of pregnant mothers. The epidemic of Zika virus is causing havoc in some places in South America. The WHO has recently declared it a Public Health Emergency of International Concern (PHEIC).

Doctors, scientists, and public health workers across the globe are working hard to contain the disease and to develop vaccines and drugs against it. The conspiracy theorists are also working hard. I am sure you have read the articles, flooding social media, connecting the Zika epidemic with greedy pharma companies. Conspiracy or not, you must have wondered how, all of a sudden, this virus is causing such havoc. It seems as if it appeared from thin air and is spreading like an avalanche.

But there is nothing unusual about it. Epidemics spread like that, and mathematical models of epidemics explain such avalanches. Let us examine one of the simplest mathematical models of an epidemic to understand the Zika epidemic. This model is taught in introductory courses in Mathematical Biology. Let's refresh MathBio 101.

                                                        Image source: BBC

An infectious disease spreads through contact between an infected and an uninfected person. In some cases, the contact may not be direct. For Zika, the contact is through mosquitoes. Whatever the mode of transfer, to spread the disease there must be some infected people in the community. The size of that infected population may be very small, but an epidemic cannot start from zero.

Zika was reported for the first time in a paper published in 1952. It was the first report of the isolation of this virus, from a rhesus monkey caged in the canopy of the Zika Forest of Uganda. There were subsequent sporadic reports of human infection by this virus across the globe. In 2015, there were reports of Zika infection in Brazil, the centre of the current crisis. So, there was already an infected population with the potential to spread it to susceptible people.

Let us call the fraction of the population with the infection I. Let the fraction of susceptible people, who are still not infected, be S. The disease spreads from I to S, and the size of the infected population (I) increases. With time, I can also decrease, as some of the infected people recover, develop immunity, or (sadly) die. Let us name the fraction of the population that has recovered or died R.

As the infection spreads, the sizes of these three populations change with time. We can write three Ordinary Differential Equations (ODEs) to capture this population dynamics:

dS/dt = -a.S.I          (eq. 1)
dI/dt = a.S.I - b.I     (eq. 2)
dR/dt = b.I             (eq. 3)

Remember, here, S + I + R = 1, and people who have recovered (R) do not get infected again.

This is called the SIR model of epidemics. It is a generic model. We have not considered any particular mechanism of the spread of infection or any particular means of disease remission. It relies on the simple idea that the infection spreads through interactions between susceptible and infected people, and that infected people either die or recover.

We can simulate this model by numerical integration of the three ODEs. For that, we require numerical values for the constant terms a and b. In this model, a grossly represents how frequently a susceptible person gets infected when he/she comes in contact with an infected one. For Zika, this would depend upon the mosquitoes: their numbers, their behavior, and also the behavior of the virus.

The other constant, b, represents how frequently infected people either die or recover. Again, this will depend upon the health of individuals, the condition of their immune systems, the existence of healthcare facilities, and also on the virus.

For the simulation, let us take a = 0.2 and b = 0.05. We also have to specify the values of S, I and R at the beginning (t = 0). Say those are S = 0.999, I = 0.001 (very few people are infected), and R = 0.

We have simulated the SIR model with these values. The results are shown here in the figure. 

Initially, most of the people are uninfected. With time, the number of infected people increases exponentially. Some of the infected people either die or recover. Therefore, the size of the infected population, I, cannot increase forever. It reaches a peak and then starts falling. Remember, those who die or recover do not get infected again. As R increases, there are fewer and fewer people left to be infected, and the disease stops spreading further.
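The original simulation code is not part of this post, but the run described above is easy to reproduce. Here is a minimal sketch using simple Euler integration (the integrator and step size are my own choices):

```python
# SIR model: dS/dt = -a*S*I, dI/dt = a*S*I - b*I, dR/dt = b*I,
# integrated with a fixed-step Euler scheme.

def simulate_sir(a, b, s, i, r, days, dt=0.1):
    """Return a list of (t, S, I, R) tuples from Euler integration."""
    history = [(0.0, s, i, r)]
    for k in range(1, round(days / dt) + 1):
        ds = -a * s * i * dt
        di = (a * s * i - b * i) * dt
        dr = b * i * dt
        s, i, r = s + ds, i + di, r + dr
        history.append((k * dt, s, i, r))
    return history

# The values used in the post: a = 0.2, b = 0.05, S = 0.999, I = 0.001, R = 0.
traj = simulate_sir(0.2, 0.05, 0.999, 0.001, 0.0, days=200)
peak_i = max(i for _, _, i, _ in traj)
print(round(peak_i, 2))  # I rises to a peak of roughly 0.4, then falls
```

Note that S + I + R stays equal to 1 at every step, since the three rate terms cancel exactly.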

Check the second ODE (eq. 2) carefully. It represents the rate of change of I with time.
When a = b/S,
dI/dt = (b/S).S.I - b.I = 0.
That means that when a = b/S, the infection will not spread.

Suppose some people are infected with the virus, but the size of the infected population is very small. Say I = 0.001 and S = 0.999. As in the previous simulation, consider b = 0.05. So, b/S = 0.05005.

For some reason (maybe the weather), the constant a is very low. Say a = 0.05005. This makes a = b/S. Therefore, dI/dt will be zero, and the size of the infected population will not change with time. Though the infection circulates in the community, it will not become an epidemic.

Suppose, after 100 days, something happens; say the mosquito population increases enormously due to a change in the weather. This will change the constant a. Now, say a = 0.3. As a becomes greater than b/S, the infection will start spreading very fast, and the size of the infected population, I, will increase exponentially.

Now we have a full-blown epidemic. Eventually, it will recede as people recover, become immune, or die. This dynamic of the sudden appearance of an epidemic is shown in the following figure. Here, till day 100 (shown by the arrow), a = 0.05005. After that, a is changed to 0.3.
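This second scenario can be reproduced with the same Euler scheme, switching a from 0.05005 to 0.3 at day 100. Again a sketch; the step change in a is my own rendering of the story above:

```python
# SIR with a time-dependent contact rate: a = 0.05005 (= b/S at t = 0)
# for the first 100 days, then a = 0.3 after the mosquito bloom.

def simulate_sir_switch(b, s, i, days, t_switch, a_before, a_after, dt=0.1):
    """Return {day: I} sampled once per simulated day (Euler integration)."""
    infected = {0: i}
    per_day = round(1 / dt)
    for k in range(1, round(days / dt) + 1):
        t = k * dt
        a = a_before if t < t_switch else a_after
        ds = -a * s * i * dt
        di = (a * s * i - b * i) * dt
        s, i = s + ds, i + di
        if k % per_day == 0:
            infected[round(t)] = i
    return infected

inf = simulate_sir_switch(b=0.05, s=0.999, i=0.001, days=150,
                          t_switch=100, a_before=0.05005, a_after=0.3)
print(inf[100] / inf[0])   # I barely moves in the first 100 days
print(inf[150] / inf[100]) # I explodes after a jumps to 0.3
```

Before the switch, a·S - b is essentially zero, so I stays flat; after it, the growth rate jumps to roughly 0.3·S - 0.05 per day and the avalanche begins.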

This is one of the simplest models of epidemics. It may not correctly explain the current Zika crisis. There are many more complicated models of epidemics. Some are disease-specific and consider finer details.

Even then, this simple model explains how, all of a sudden, an epidemic can start like an avalanche. It also explains how common precautions, like using mosquito nets or vaccination, reduce the chance of an epidemic. All these steps reduce the value of the constant a. As long as we keep a less than or equal to b/S, we are safe.

Update: Very recently, a paper was published that models the transmission dynamics of the Zika virus in French Polynesia. There was an outbreak of Zika in these islands in 2013-14.

They have used a mathematical model very similar to the SIR model, only a bit more elaborate.

The model includes the dynamics of the mosquito population. There is a susceptible mosquito population (Sv) that can get the virus from infectious people (IH). Once they have the virus, we call them exposed (Ev). Some of these exposed mosquitoes become infectious (Iv) and infect susceptible humans (SH).

They have also included a human population (EH) that is exposed to Zika through mosquito bite but the infection is in latent stage.

This is called the susceptible-exposed-infectious-removed (SEIR) model. Just like in the SIR model, ODEs are used to model the dynamics of all the populations. Here, we have seven different populations, so they have used seven ODEs. They have one additional ODE, for the cumulative number of infected people.

For details, look into the paper. It is freely available at Biorxiv. 

By fitting the model to the population data of the outbreak, they have made an interesting prediction. Suppose people who recover from the infection get life-long natural immunity to Zika. In that case, the model predicts that it would take at least a decade before Zika re-invades this island population. Some relief for the health workers!!

Sunday, January 31, 2016

Teaching Biology Differently: Teaching The Design Principles

I used to hate biology in school, and even in college. I hated it for all those difficult-to-pronounce names and lengthy descriptions. Eventually, my skills in drawing and storytelling helped me sail through biology examinations.

Ask a student of our Biology 101 class, which is compulsory for all our undergraduate students, and you will get a similar answer.

We all love physics for its laws and principles. Math is our darling, as it gives us the power to understand a phenomenon, magically, through some equations. It is logic in its purest form. Biology, as the books and teachers present it, does not have any law, rule, or principle. You just observe a phenomenon and accept it as fact, as it is. Read it; remember it. Molecular-level biology, at the college level, is often taught in the same fashion; only the level of the observations changes.

Is it true that biology is nothing but a compilation of information? Is it really devoid of any underlying principles? Or are we teaching biology the wrong way?

Modern biology evolved from natural history, the art of observing and recording nature. Once, biology was like astronomy: you can observe but cannot manipulate the objects that you are studying. However, modern biology gives us tools to manipulate and interrogate living things.

Even when you merely observe, you can draw generalized principles. The heliocentric theory of our solar system was not developed by manipulating the sun and planets. It was developed through observation, mathematical calculation, and rational imagination. In fact, the theory of evolution proposed by Darwin was developed in the same fashion, by systematic observation and logical deduction. While teaching physics, we start with the heliocentric theory and the theory of gravitation, rather than teaching lists of names of galaxies, stars, and their planets. Why can't we follow the same approach in teaching biology?

But are there any principles or laws in biology? Biology deals with living beings, and they follow the laws of nature that apply equally to the inanimate and the animate. We learn those in physics and chemistry (which, again, is an extension of physics). Something living cannot violate those laws. Whatever a living being does, from birth to death, must follow those laws of nature; either we know them, or some may still be unknown to us.
Some of my biologist friends will not be happy with this. They will protest that I am equating biology with physics. Trust me, I’m not.

Living things are definitely more complicated than a ball rolling down a ramp, as we studied in physics textbooks. So are weather, geology, hydrology, etc. Most natural systems, living or nonliving, are much more complicated than the pendulum you used to calculate the acceleration due to gravity. Phenomena observed in such complicated systems are not easy to explain with just a few simple equations. (At least not till now!!)

Living systems are also very diverse. The same thing can be achieved by multiple strategies, without violating the laws of nature. That is where biology becomes difficult for students. Teachers often over-emphasize the diversity, not the unifying principles. That is where the concept of design principles helps.

Imagine yourself as a designer. You want to design something tangible, with some specific properties and functions. As a designer, you have to think of different ways to design it. Those designs will have advantages and disadvantages. Above all, as the system is real and physical, none of the designs should violate the laws of nature. So your options in design are bounded by those laws.

The same is true for biology. While studying biology, we can look at it from the perspective of a designer. You have some target to achieve, and you have some basic building blocks in hand: molecules, cells, tissues, etc. Each of these building blocks has its properties. How will you design the system?

Let us take the case of the immune system. The objective is to create a defense system against ‘others’. How will you go about it?

First, you have to define the border. Then you have to create the first line of defense at the border; something robust. That is where your innate immune system comes in. Making the system more sophisticated, you have multiple tiers of soldiers and officers with different capabilities. That is how you have different immune cells.

You must have a system to check foreigners and keep valid citizens safe. You need some sort of passport with identification stamps. That is achieved through self-nonself discrimination and immune memory. You must also have spies snooping around for invaders. So you use cells like macrophages.

Like a modern army, you want to create a tight command system by segregating people with different skills and responsibilities. So come your different immune cells, with different capabilities, interacting to control each other's activities. You do not want your boys to move around freely with loaded arms. That is why you create cantonments, the lymphatic organs, where you keep your soldiers.

When you have a bomb that causes collateral damage, you do not trust the trigger to the hands of only one person. You make sure that at least two people must agree to pull the trigger. That is what they do for atom bombs. And that’s why we have a “two-signal” system for the immune response.

Now put all these design principles together. Introduce the molecules, the cells, and the rest in this context of defense design. Along with these, introduce the students to the chemistry, kinetics, and thermodynamics of molecular recognition, the diffusion limits of molecular signaling, and the mechanics of cell migration. One can even introduce students to stochastic processes, like the diversification of the B-cell repertoire.
With these, students will realize how the immune design is constrained by the laws of nature. Biology will be connected to physics, chemistry, and math. It will be easy for them to comprehend and appreciate biology.

Yes, one has to know what a B-cell is and how it differs from a T-cell. But I will not bug my students to remember the names of all the molecules and cells. Rather, I will focus on the bare minimum and put more emphasis on the design. In fact, one can comprehend the principle of immune memory and vaccination even without remembering all the different variants of B-cells and the molecules involved.

Someone who eventually works in the field of biology, say during his/her PhD, will learn those details in time. They will mostly learn the finer things while working on them. For the rest, let us focus more on the principles.

Let us also instigate their imaginations with design problems. For example, after the basics of immunology, one can ask the students to think about the design principles of immune tolerance in pregnancy. An embryo is genetically different from its mother. The mother's immune system should consider it foreign. How would you design a safety net to save the embryo from attack by the mother's immune system? This way, we will be able to provoke the students to think about an active field of research.

One can use this approach of teaching design principles in other topics of biology, be it the basics of molecular biology, metabolism, or signal transduction. We can shift the focus from "what happens in biology" to "why it happens that way". I call it teaching the design principles.

It also helps to connect biology with physics and chemistry, even with engineering. It helps to introduce mathematics into biology. Above all, it instigates students to ask questions and learn. This approach is particularly helpful for a heterogeneous class, with students from different disciplines.

Over the years, I have practiced this. I have always got a positive response. It allows me to break the first barrier, to tantalize students into wanting to know more. Once they are hooked, you can push them to remember those names and details.

1) This writing is focused primarily on teaching undergraduate students, not on teaching students pursuing higher, specialized study in biology.

2) There are amazing teachers, all around, who teach biology in their own ways. The opinion given here is NOT the only way to teach biology.

Sunday, January 24, 2016

Anti-social media

Winter is closing. The primary conference season, here in India, is almost over by now. Conferences are for academic socialization. You make new contacts, refresh old ones, exchange email IDs, occasionally discuss science, and obviously do lots of bitching about the lack of research grants, bureaucratic red tape, the politics of award committees, etc. (For those lesser mortals not in academia, I suggest reading Small World: An Academic Romance by David Lodge to get an idea.) Whatever it is, conferences in India are lively. They serve good food and are crowded and noisy enough to make you feel alive.

Another thing that makes you feel alive and kicking is social media, from Facebook to Twitter. These are also social activities, just virtual. Social media is slowly becoming part of academics. It can be used to champion popular science and to share ideas and information with fellow scientists. It is an excellent medium for debating science policies. (If not impressed by my words, you may read this post to know why a scientist should use social media.)

Institutions across the globe are now using Twitter and Facebook to serve their news to a larger audience, including the media. Funding agencies do so as well. Social media is often used to promote conferences and meetings. Individual researchers post their work. Services like ResearchGate are trying to build social networks exclusively for scientists, promoting social media to discuss science with all its nuts and bolts. Even then, most scientists are still not using these online tools for academics.

The situation is far grimmer in India. The present government is promoting the use of social media to interact with its citizens. But unfortunately, only a handful of academic and research institutes use social media to engage with the public and the media. Individual scientists rarely use social media to interact with their peers. Strangely, many young scientists regularly use social media like Facebook to spam 'cute' pics of their puppies or to vent opinions on terrorism, but rarely to share their research and science in general. Drop a 140-character review on Twitter about an exciting paper you just read, or post a recent popular science article on Facebook. Don't expect a buzz from your colleagues and peers. Rather, expect silence.

Facing some technical trouble in an experiment? Post it to ResearchGate. Don't expect an answer from your Indian colleagues. Most of the answers will come from someone abroad. Many of your Indian peers and colleagues are on ResearchGate and regularly update their publication profiles. But they rarely engage in peer discussions and debates there.

It is weird. Science is a social endeavor. Discussions, debates, and the sharing of information, over a coffee or on the Web, enrich everyone. I wonder, then, why my "Argumentative Indian" colleagues and peers are so silent on the Web. One cynical old colleague of mine does have an answer for me, though: "Your tweets are not counted at the time of promotion."