
Physics of religion

Here's an interesting way to think about the spread of religion. Intuitively, it spreads like an infectious disease such as influenza or plague, so it could be modeled mathematically as a process of contagion. Now some physicists have proposed modeling it instead as a process of crystallization:

Physicists make religion crystal clear
The rise and fall in the popularity of major religions can be described using the same mathematics that is used to model crystallization processes, claim physicists in Belgium. The researchers have modelled the time evolution of the numbers of adherents to religions and claim that their work sheds light on an important social phenomenon -- how a religion such as Christianity can grow rapidly from very small beginnings (Europhysics Letters (EPL) to be published).

Physicists have a long history of applying statistical models to the study of human behaviour and have tackled problems as diverse as the performance of financial markets and the spread of languages. Now, Marcel Ausloos and Filippo Petroni at the University of Liege have turned their attention to the dynamics of religion by relating the emergence, growth and demise of religions to phase transitions that occur during crystallization and other physical processes.

It seems to me that this sort of technique could be applied not only to religions as a whole, but also to smaller-scale phenomena, such as the explosive growth of evangelical Christian megachurches in the U. S. Many of these operations serve thousands of customers and seem to spring up almost overnight. They tend to be started by charismatic individuals who have a special flair for showmanship, such as Ted Haggard (blogged about here).

Of course, some of the ideas behind this line of thinking have a long history – in such fields as crowd psychology. It would be interesting if the research mentioned above could actually produce models that yield quantitative predictions for the nucleation and growth of megachurches in a geographical area, based on sociological variables (demographics, population density, etc.).
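Purely to make the crystallization analogy concrete – this is an illustrative sketch, not the Ausloos–Petroni model, and the rate constant and exponent below are hypothetical stand-ins for the sort of sociological variables just mentioned – here is what the classic Avrami (nucleation-and-growth) equation looks like when read as "fraction of a population converted over time":

```python
import numpy as np

def avrami_fraction(t, k=0.01, n=3.0):
    """Avrami (JMAK) equation: the fraction of a system transformed by
    nucleation and growth after time t, X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * t**n)

# Toy illustration: fraction of a population "converted" over 10 years.
# k (nucleation/growth rate) and n (growth dimensionality) are made-up
# placeholders for whatever sociological variables a real model would use.
years = np.linspace(0, 10, 11)
for t, x in zip(years, avrami_fraction(years)):
    print(f"year {t:4.1f}: fraction converted = {x:.3f}")
```

The output traces the familiar S-shaped curve – slow nucleation, then explosive growth, then saturation – which is at least qualitatively the "rapid growth from very small beginnings" pattern described in the quote above.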


Announcement of Philosophia Naturalis #6

Philosophia Naturalis #6 will be published on Thursday, February 1 right here at Science and Reason.

Don't miss this literally once-in-a-lifetime opportunity to have your favorite article included in Philosophia Naturalis #6. Remember, Philosophia Naturalis is the blogosphere's best blog carnival covering all of physical science and technology.

More information on suggesting articles is here.

Mantle plumes

Here's a "hot" controversy (in more ways than one) that you don't hear about often, unless you're a geophysicist. Geophysics doesn't get as much attention as many other topics in science, such as black holes, evolution, etc. But there are nevertheless very interesting open – or at least, controversial – questions in the field. One of these has to do with "mantle plumes".

A mantle plume, according to Wikipedia, is "an upwelling of abnormally hot rock within the Earth's mantle". For many years, the conventional wisdom has been that such plumes, when breaking through the Earth's crust where it is thinnest, on an ocean floor, are responsible for the formation of island chains, such as the Hawaiian islands. However, some skeptics have questioned whether plumes are really necessary to explain volcanic islands. Now there appears to be more evidence that the traditional idea is correct:

Hotspots or Not? Isotopes Score One for Traditional Theory
A running battle has evolved over the last 30 years concerning hotspots: One camp claims it is not necessary to invoke mantle plumes to explain such volcanic islands, and the other camp - a sizeable portion of the geological community - supports mantle plumes as the most internally consistent explanation for a wide variety of data.

A study published this week in the journal Nature raises the bar for plume opponents by finding a close correlation between modeled and observed ratios of uranium-series isotopes across eight island locations. The study strongly supports upwelling of mantle material as the source of these islands. Moreover, the detailed data allow researchers to estimate the change in temperature, speed and size of mantle plumes at the locations studied.



New stars shed light on the past

New stars shed light on the past (1/8/07)
A new image from the Hubble Space Telescope shows N90, one of the star-forming regions in the Small Magellanic Cloud. The rich populations of infant stars found here enable astronomers to examine star forming processes in an environment that is very different from that in our own Milky Way.




N90 (Hubble Space Telescope image)


Working Around DeepFreeze V.6

For earlier versions of DeepFreeze there was already plenty of software for working around it, but for the newest DeepFreeze, as far as I know, there is nothing yet. Here is a small trick we can use to get DeepFreeze on someone else's computer to match the configuration we want:
1. On the drive that holds Program Files, look for the DeepFreeze 6 configuration file, $persi0.sys. If it is still there, the damage is in the registry.
2. Install DeepFreeze 6 on another computer, with or without a password, as you prefer.
3. Make a bootable diskette or CD that supports NTFS (if the partition is NTFS). I use a BartPE bootable CD myself; a Linux live CD or a DOS NTFS boot diskette also works.
4. Restart that computer, boot from the CD or diskette, find the file "$persi0.sys", and copy it to a flash drive. It is roughly 10 MB in size. When done, shut the computer down.
5. Restart the computer whose DeepFreeze 6 is broken, boot from the CD or diskette, and copy "$persi0.sys" from the flash drive to the drive where the original "$persi0.sys" lives, replacing that file. Then reboot normally from the hard disk and check whether you can get into the configuration.
6. Good luck – note that this method can also be used to get another computer running DeepFreeze 6 to use our configuration and our password.

T cells

I'm seeing a number of interesting stories coming along about research into the immune system. Since this is a rather mysterious subject for most people, and it touches upon many other topics in health, medicine, and biotechnology, perhaps it's an appropriate time to start writing about it.

When I say "the immune system" I mean primarily the human immune system, although that has a lot in common with the immune systems of other mammals, and even other vertebrates.

In addition to obvious medical topics such as infectious diseases and auto-immune diseases, the immune system also has a lot of impact on cancer, diabetes, cardiovascular diseases, and other disorders. So let's get started, beginning with terminology, of which, unfortunately, there is a lot.

T cells are one type of immune system cell and belong to the class of "white blood cells". (To further confuse matters, "leukocyte" is a more technical synonym for "white blood cell".) More precisely, T cells belong to a subtype of white blood cells known as "lymphocytes". B cells are another very important kind of lymphocyte, but we'll focus mostly on T cells here.

The immune system itself is considered to consist of two parts – the innate immune system and the adaptive immune system. The former consists of mechanisms that protect an organism from infection by invading parasites without being specific to any particular kind of invader. It is evolutionarily old and found even in plants. The latter part is evolutionarily newer, and seems to have first appeared in jawed vertebrates. It is capable of adapting to newly encountered pathogens and "remembering" how to stop them. Both B cells and T cells are part of the adaptive system. Another type of lymphocyte, the "natural killer" (or NK) cell, is part of the innate system.

T cells get their name from the organ known as the thymus, which is the location in which they mature. They provide cell-mediated immunity. (B cells provide a part of what is known as humoral immunity, which involves entities other than cells that assist with immunity, such as antibodies.) Some of the different types are cytotoxic T cells (also known – confusingly – as Tc, CD8+, or killer T cells), helper T cells (also known as Th or CD4+ cells), memory T cells, regulatory T cells (also known as suppressor T cells), and natural killer T cells (also known as NKT cells, and which should not be confused with either Tc killer T cells or NK cells). You'll see various of these types discussed below. It's not necessary to memorize the types now. No snap quiz will be held.

The first research report we'll consider helps us understand how killer T cells (Tc cells) attack and kill cancer cells. To begin with, all types of T cells have a molecule on their surface called a T cell receptor (TcR), but this occurs in highly variable forms. By definition, the Tc cell also has a molecule on its surface called CD8, which is a glycoprotein. Tumor cells (like any other cells) carry many large molecules on their surface, and a molecule unique to the particular kind of tumor cell is called an antigen. (Antigens occur in other contexts as well, such as in relation to viruses and bacteria.)

By virtue of the specific form of TcR on the surface of the Tc cell, the cell can interact only with other cells that present on their surface very specific parts (called epitopes) of specific antigens. When such an interaction successfully occurs and the Tc cell binds to the epitope, the cell is said to be "activated". Activation brings about important changes: the Tc cell releases cytokines that cause it to proliferate, as well as cell-killing toxins (cytotoxins) that induce cell death (apoptosis) in the tumor cell.

The first research report describes a technique for making actual movies of the T cell activity that occurs in destroying tumors. It also demonstrates how an experimental cancer vaccine can be used to activate Tc cells so that they can kill tumor cells.

Innovative Movies Show Real-time Immune-cell Activity Within Tumors
Using advanced new microscopy techniques in concert with sophisticated transgenic technologies, scientists at The Wistar Institute have for the first time created three-dimensional, time-lapse movies showing immune cells targeting cancer cells in live tumor tissues. In recorded experiments, immune cells called T cells can be seen actively migrating through tissues, making direct contact with tumor cells, and killing them.

In a nutshell, what the research showed was that mouse Tc cells would not kill tumors unless they had been activated with a vaccine based on the appropriate antigen. Further, once a Tc cell had destroyed its first tumor cell, the Tc cell began to migrate actively through the tumor to kill other tumor cells.

The second research report involves helper T cells (Th cells) instead of Tc cells. The two types of T cells are very similar except for a few things. First, a Th cell carries a CD4 glycoprotein on its surface instead of CD8. The consequence of this is that a Th cell, once activated, binds to different kinds of cells, and in a different way, than a Tc cell does. Second, a Th cell does not produce cytotoxins, and so it cannot kill other cells directly. Instead, a Th cell works indirectly through other immune system cells (primarily B cells, dendritic cells, and macrophages) to carry out the immune activities that clear an infection.

We need to explain a bit more about antigens and epitopes. An epitope (which is some fragment of an antigen) doesn't occur on a cell surface by itself, but instead is "presented" on the cell surface by a large molecule known as a major histocompatibility complex (MHC). Two types of MHC are important here: MHC class I and MHC class II. MHC class I molecules usually present epitopes that originate inside a cell, such as protein fragments specific to a cancer cell or a cell infected by a specific virus. Tc cells can bind only to MHC class I molecules combined with an epitope. MHC class II molecules usually present epitopes that originate outside a cell, such as fragments of proteins from a bacterial membrane. Also, MHC class II molecules generally occur only on special types of cells, called "professional" antigen presenting cells (which include B cells, macrophages, and dendritic cells). Essentially, the main purpose of such a cell is to carry epitopes that interact with other parts of the immune system.

Let's have an example of how some of this machinery works. Specifically, let's look at how the immune system deals with a flu virus. Suppose it is a virus that has recently mutated, so that the immune system has not encountered it before. It turns out that the immune system can handle the virus anyhow, but not very rapidly. A very important part of the immune system that deals with infectious agents is known as an antibody. Antibodies are large proteins that consist of two parts. One part is the same for all antibodies of a particular class (and there are only five classes). The other part is highly variable, and can assume billions of different forms. Antibodies are manufactured in B cells, and there is a 1-1 correspondence between B cells and antibodies: each type of antibody is manufactured by one and only one type of B cell, depending on the precise DNA sequence in a part of the B cell's genome. The variability of B cell DNA is what gives the huge variety of antibodies.

The reason there are so many antibodies is that their function is to bind with (and only with) specific small parts of proteins – the epitopes – that occur in an antigen, such as a flu virus. The chances are very high that some fragment of one of the flu virus's proteins will contain a sequence that is the epitope some existing antibody binds to. When this happens, we say that the whole virus is an antigen for that antibody (and probably for many others, via other epitopes).

When the new flu virus enters the body, however, it is quite possible that it does not immediately encounter any antibodies that bind to it. So the virus will go on to infect a cell somewhere. But cells are capable of killing at least some of the viruses that infect them. When this happens, some epitope from the virus is likely to be picked up and "presented" to the external environment of the cell by means of a class I MHC molecule; such molecules exist in abundance on the cell membrane of most types of cells. Now, it turns out that the T cell receptor (TcR) molecule on the surface of a Tc cell is also highly variable, like an antibody, and will bind to only a very limited number of epitopes. But eventually a Tc cell that can bind will come along. When it binds to the MHC molecule bearing an epitope it has never encountered before, the Tc cell is "activated". This results in two things (as we discussed earlier). First, the Tc cell produces cytotoxins that kill the infected cell. Second, it starts to proliferate rapidly, retaining its affinity for the epitope that activated it. Thereafter, the newly created Tc cells can spread throughout the body, killing other cells that are infected with the same virus.

There are two problems with this process, however. First, there will be a lot of dead cells as a result – not enough to do serious harm to the body, but not a good thing either. Second, the process takes a long time, allowing the virus to produce unpleasant and possibly harmful side effects, as anyone who's had a flu infection knows.

But there's good news too. There are many cells in the immune system called macrophages. They can swallow up foreign material like viruses – provided that this material has already been attached to some antibody. Otherwise, the macrophage would go around swallowing up all sorts of things that need to be left alone. The main purpose of antibodies is to act as a flag to macrophages that the thing they're attached to needs to be destroyed. Now remember that there are also a few B cells around that manufacture antibodies that can bind to the virus via one of its epitopes. Although this will take some time, eventually the virus will get bound to an antibody, and somewhat later be swallowed up by a macrophage.

Something that is special about macrophages is that they can present epitopes on their surface using MHC molecules of class II. (B cells and dendritic cells are the other main kinds of cells that ordinarily have class II MHC molecules on their surface.) This will happen in the case of virus antigens that have been swallowed by the macrophage. The key thing is that the helper T cells (Th cells) are capable of binding to the MHC + epitope on the macrophage surface, provided the TcR of the Th cell matches the epitope. When that happens, the Th cell becomes activated, which will cause it to proliferate rapidly.

Now the B cells come into play again. They express some of the antibodies they produce on their cell surface, in addition to releasing them to the intercellular space. When one of these B cells encounters a virus having an epitope that binds to the antibody, the virus sticks to the cell, and is then swallowed and destroyed, just as with a macrophage. Of course, this happens a lot less than with macrophages, since the latter can trap a virus bound to any antibody, not only a very special one. But when it does occur, the epitopes of the virus are presented on the B cell surface via a class II MHC molecule. And so matching Th cells, which have been proliferating since being activated in an encounter with a macrophage, eventually come along. When one of those binds to the MHC + epitope of the B cell, it emits signal molecules (called cytokines) that cause the B cell to become activated. And that means that B cells that produce antibodies specific to the invading virus start to proliferate. From then on, things move more rapidly, until the virus infection is (hopefully) cleared.

Better yet, some of these virus-specific B cells hang around in the body, so that the next time the same virus is encountered, an immune system reaction gets going much more quickly. This is the reason that a person becomes immunized against a specific virus after the first exposure to it – if everything goes right. (Actually, there's also another way the infection can be "remembered", involving dendritic cells.)

This, of course, is how vaccines work. They include some form of the virus, or parts of it, that can't cause the full-blown disease, so that eventually the body will create a lot of B cells and antibodies specific to the virus. (Neither the Th cells nor the Tc cells hang around for very long.) The problem is that this process doesn't work very well for a number of viruses, including HIV, which causes AIDS. This is why an effective AIDS vaccine has not yet been developed.

But recently reported research has found a more efficient way to introduce virus epitopes into cells that are capable of displaying class II MHC molecules on their surface – such as epithelial (skin) cells. It's significantly more efficient because it avoids the lengthy bootstrap process outlined above for generating lots of Th cells: it doesn't rely on existing antibodies and macrophages to activate the Th cells. See

Cellular Pathway Yields Potential New Weapon In Vaccine Arsenal
[The researchers] found that a surprising number of cells with MHC class II molecules on their surfaces used the autophagy pathway. In skin (epithelial) cells and two other types of immune cells (dendritic and B cells), 50 to 80 percent of the autophagosomes moved into the loading compartment for MHC class II molecules. “For types of cells that upregulate MHC class II upon inflammation — epithelial cells of infected organs, for instance — one could assume that they might actually use the autophagy pathway fairly frequently,” Münz says.

Then, to test the pathway’s effectiveness, the researchers targeted an influenza antigen directly to autophagosomes. They found that they were able to increase antigen presentation by MHC class II molecules, subsequently boosting helper T cell recognition of viral antigens.

The third piece of research we'll consider here involves one additional type of T cell not yet discussed: regulatory T cells (Treg for short). Such cells are thought to temporarily suppress the activity of the immune system, in order to avoid problems of overactivity. Such a regulatory mechanism can be important in controlling autoimmune diseases, preventing organ transplant rejection, or facilitating cancer immunotherapy.

Until recently, there was some doubt as to the existence of Treg cells as a distinct type of T cell, but that doubt has now been laid to rest. The question now is how to stimulate the production of Treg cells when they would be useful. As mentioned already, T cells mature in the thymus. During that process, early-stage T cells acquire the characteristics specific to Th, Tc, or other kinds of T cell, including Treg cells.

The researchers believe they have identified a process of "trans-conditioning" that makes the difference:

Scientists Show How Immune System Chooses Best Way To Fight Infection
"Our team has shown that a process known as 'trans-conditioning', which we knew to be involved in T cell development, actually has a profound influence on whether a T cell becomes an effector or a regulatory cell," explains Professor Adrian Hayday of King's College London. "This may be clinically significant; if we can find a way to influence this process, it may be possible to make the body produce effector T cells in a cancer patient or regulatory T cells in someone suffering from autoimmune disease, both of which are caused by the immune system malfunctioning."



Gamma-ray burst surprises in 2006

Gamma-ray bursts (GRBs) have been puzzling astrophysicists for a long time. A little over a year ago, the picture seemed to be getting clearer. But during the past year, it seems to have become rather more complicated. And the year saved the best surprise for last. This spurt of activity is largely due to a flood of data, obtained with the help of the Swift satellite, launched in November 2004 specifically to study GRBs. (See also here.)

GRBs first attracted attention because they were unlike any other highly energetic phenomenon known in the universe. A GRB includes a flux of gamma-rays, which are, by definition, the highest energy photons. But unlike other gamma-ray sources, such as those produced when matter falls into a black hole, a GRB's gamma-ray flux persists for only a short time – a few minutes at most. Gamma-ray emissions associated with black holes persist indefinitely.

At first, it was not even known whether GRBs normally came from something in our own galaxy or something perhaps much farther away. After it was determined around 1992 that GRBs were distributed evenly across the whole sky, it became almost certain that most GRBs were not associated with our galaxy, since if they were, they would mostly lie in the direction of the visible Milky Way.

After many more examples had been observed, it was apparent that GRBs were of at least two different types: short duration events ("short GRBs") lasting less than 2 seconds, and longer events ("long GRBs") lasting from 2 seconds to a few minutes. When it became possible to obtain spectra from the galaxies associated with GRBs, and hence to determine approximate distances by means of the redshift, astrophysicists were surprised to find that long GRBs were usually extremely distant – more than 8 billion light-years. This implied that whatever caused a long GRB had to be almost incredibly energetic, for the burst to remain visible across so many billions of light-years with its photons still in the gamma-ray energy range, despite being red-shifted by a factor of 2 or more. (That is, the photon wavelength as observed was more than twice its length when it was emitted.) Short-duration GRBs, on the other hand, were found to be somewhat closer, and therefore their sources must be 10 to 100 times less energetic.
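For reference, the "factor" being referred to here is just 1 + z, where z is the measured redshift:

```latex
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}
```

so a burst red-shifted by a factor of 2 or more is simply one with z ≥ 1.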

It was easier to understand long GRBs, since almost the only explanation that could work was for the GRB to be produced in a supernova explosion in which most of the energy was concentrated into jets parallel to the axis around which the supernova's progenitor star rotated. The jets arise because material ejected from the dying star just before its collapse forms an accretion disk around the newborn black hole. This material is subsequently sucked back into the black hole and produces particle jets and the blast of gamma-rays. The emitted photons have such high energy because most of the supernova's energy is concentrated so narrowly. Eventually, accumulating observations confirmed that the characteristics of the light output in a long GRB were what would be expected in a very energetic supernova event. Such a supernova must result from the death of a very massive star (more than 40 solar masses), and is sometimes called a "hypernova". This model is sometimes known as the "collapsar" model.

Short GRBs were harder to figure out, but eventually it appeared that they could be explained as the result of a collision between two neutron stars. The amount of energy emitted and the duration of the event appeared to be just about right.

I wrote here about these conclusions a little over a year ago. But the ink was hardly dry (so to speak) on that post before new findings emerged that suggested some short GRBs weren't fully with the program.

Two papers were published in December 2005, suggesting that not all short GRBs had the same origin:

Breakthrough in puzzle of giant explosions in space
The Hertfordshire team’s new result adds a further, unexpected twist to the tale: a significant proportion of short bursts seem to originate from galaxies much more local to us than those previously observed. These nearby short bursts, could, like their more distant brethren, result from the catastrophic collision of neutron stars, though if so then their outbursts must be much weaker. Alternatively they could be a fundamentally different kind of explosion. A prime candidate could be an exotic object called a magnetar — a lone neutron star with a magnetic field a hundred thousand billion times that of the Earth - tearing itself apart due to enormous magnetic stresses.

A second paper published at the same time was more specific about what might cause short GRBs that are more nearby, and hence less energetic. It leaned towards an explanation involving the merger of a neutron star and a black hole, rather than a magnetar disintegrating from magnetic stresses:

Witnessing The Flash From A Black Hole's Cannibal Act
An international team of astronomers reports the discovery of a third short gamma-ray burst, associated with a nearby elliptical galaxy. The low level of star formation in such galaxies and the detection of a second long-lasting flare indicate that this gamma-ray burst is most likely the final scream of a neutron star as it is being devoured by a black hole.

This paper was based on measurements of a GRB observed on July 24, 2005 (hence designated GRB 050724), as well as an earlier one (GRB 050509B). The first of these was located in a galaxy "only" about 3 billion light-years away. This showed that short GRBs might result from the release of 100 to 1000 times less energy than a typical long GRB. GRB 050724 had a longer "afterglow" than would be expected from a merger of neutron stars (which would collapse almost instantly to a black hole). But the afterglow would be consistent with the merger of a neutron star and a black hole, where the process begins with the neutron star being rent asunder, followed by the pieces falling into the black hole over a longer period of time.

Other accounts of these results can be found here, here, here, and here.

To summarize, as of December 2005, the most common type of short GRB was figured to be the result of a merger between two neutron stars, while atypical short GRBs could be due either to magnetars or to the merger of a neutron star with a black hole. But new examples kept showing up.

In February 2006 a computer study showed that about 1% of short GRBs due to neutron star mergers should occur in globular clusters, which are tightly packed with stars, so that the chances of an encounter are high. More typically, neutron star mergers should occur between stars that are part of a single binary system. But in fact from 10 to 30% of observed short GRBs occur in globular clusters, far more than would be expected. It was hypothesized that in the globular-cluster events the energy output would be less tightly beamed, and hence more likely to be observed. More details are here.

Just a little later, on February 18, a very unusual GRB was observed as part of a supernova event. Named GRB 060218, it was much longer than typical long GRBs – 33 minutes in duration. It was also relatively close (440 million light-years) and much less energetic (by a factor of between 10 and 100) than typical long GRBs. Remember this one – its importance will be described later. Details: here, here, here, here, here, here, and here.

In March, three papers in Nature announced that observations from a number of ground-based and space-based instruments had confirmed that GRB 050904, which was first seen in September 2005, was the most distant GRB ever seen. Its redshift was measured to be 6.3, making it about 12.8 billion light-years away, and placing it at a time when the universe was only about 900 million years old. The earliest GRB previously observed was dated to about 1.4 billion years after the big bang. The characteristics of GRB 050904 were typical of long GRBs, so the results show that such an event was possible at that early date. Details: here, here, here, here.
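For readers who want to check figures like these, here is a minimal sketch of how the distance and age numbers follow from the redshift, using the astropy library and an assumed flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3); the papers' adopted parameters may differ slightly, so the exact values will too:

```python
from astropy.cosmology import FlatLambdaCDM

# Assumed concordance-style cosmology; not necessarily the exact
# parameters used in the GRB 050904 papers.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

z = 6.3  # measured redshift of GRB 050904
print("Age of universe at z = 6.3:", cosmo.age(z))           # roughly 0.9 Gyr
print("Lookback time to z = 6.3: ", cosmo.lookback_time(z))  # roughly 12.8 Gyr
```

Both numbers agree with the "about 12.8 billion light-years" and "about 900 million years old" figures quoted above.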

Starting later in March, additional doubts were expressed that short GRBs had a single, simple explanation in terms of merging neutron stars. First, analysis of short GRBs occurring on July 9 and July 24, 2005 showed X-ray flares minutes after the initial burst. Then a short burst on December 21, 2005 appeared to have the total energy of a typical long GRB – 10 times as much as the most energetic short GRBs known. Several models have been suggested for these and other anomalous bursts. And this is in addition to models involving magnetars, applicable to perhaps 10% of short GRBs. Here's a good summary of the situation: Cosmic Explosion Mystery Deepens.

In May, further analysis of the July 24 event showed that it radiated its energy in all directions. However, the December 21 event appeared to radiate its energy in narrow jets with opening angles between 4° and 8°. Because the energy was narrowly focused, the GRB appeared to be very bright, but the total energy was not as high as if it had radiated in all directions. Thus this event actually had an energy in the normal range for short GRBs. However, evidence of jets from other short GRBs is sketchy. And it is difficult to explain jets in a neutron star merger model. The short GRB situation is looking rather messy. Reference: High-energy jets spew from short gamma-ray bursts.
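A standard back-of-the-envelope way to see how much difference beaming makes – this is a generic estimate, not the calculation in the paper – is to note that a two-sided jet with half-opening angle θ illuminates only a fraction 1 - cos θ of the sky, so the true energy is the isotropic-equivalent estimate multiplied by that fraction. For the quoted 4°–8° opening angles:

```python
import numpy as np

def beaming_fraction(theta_deg):
    """Fraction of the sky illuminated by a two-sided jet with
    half-opening angle theta_deg (the standard 1 - cos(theta) factor)."""
    theta = np.radians(theta_deg)
    return 1.0 - np.cos(theta)

for theta in (4.0, 8.0):
    f_b = beaming_fraction(theta)
    print(f"theta = {theta}°: beaming fraction ≈ {f_b:.4f} "
          f"(true energy ≈ isotropic estimate / {1 / f_b:.0f})")
```

That works out to a reduction by a factor of very roughly 100 to 400, which is how a burst that looks as luminous as a long GRB can still carry an ordinary short-GRB energy.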

As if all that were not enough, later in May an analysis of a short GRB that occurred on January 21 showed that the event was from 10.1 billion to 12.7 billion light-years away – which would make it about as distant as the long GRB of September 4, 2005. All previously measured short GRBs were no farther than 6.5 billion light-years. Consequently, GRB 060121 might be as energetic as the most powerful long GRBs. This might mean the energy of the burst was concentrated in very narrow jets, so it was not as energetic as it appeared – as with GRB 051221. Alternatively, the characteristics of neutron stars in the early universe may have been different from those of more recent ones, though this seems like a stretch. Or perhaps some entirely different model, involving neither neutron stars nor supernovae, is needed. Reference: Distant gamma-ray burst may be in class of its own.

Late in August, 4 papers appeared in Nature that gave detailed analysis of GRB 060218 – the one of very long duration (33 minutes) but low energy. The event has been put in a new class called an X-ray flash. It appears to have a jet structure and result from a Type Ic supernova, which involves the least heavy type of star (about 20 solar masses) that can go supernova when its hydrogen and helium supply is used up. Instead of leaving behind a black hole, its remnant may be a magnetar. (Unlike short GRBs possibly resulting from magnetars that self-destruct from their own magnetic fields, this GRB was produced in the supernova event.) References: here, here, here, here, here, here.

The news flow then went quiet for a few months. And then yet another surprising twist showed up. This involved observations of two more long but nearby and low-energy GRBs. Because they were of the long type, they (presumably) did not involve neutron stars or magnetars. But these two seemingly did not involve supernovae either, unlike GRB 060218. The events occurred on May 5 and June 14 of this year. The findings were published in the December 21 issue of Nature.

GRB 060614 lasted 102 seconds and occurred at a distance of 1.6 billion light-years. (Nowadays that's considered relatively nearby.) In a long GRB due to a supernova, there is a rebrightening that lasts for days after the initial flare. This is the primary source of light that makes the supernova visible. It comes from the gravitational energy of collapse and the energy of the fusion reactions which occur. If anything like that happened in these two cases, it must have been at least 100 times fainter than normal. GRB 060505 lasted only 4 seconds, which is still longer than a short GRB (under 2 seconds), and was somewhat more than 1 billion light-years distant.

Not only was there no evidence in GRB 060614 of the light normally seen following a supernova explosion, but it occurred in a galaxy with few young stars – the only kind that can go supernova, because supernova progenitors must be massive and short-lived. The problem is that it's very hard to understand the sustained emission of gamma-rays for 102 seconds except in a supernova event. The burst was also more energetic than the normal short GRB involving neutron stars. There are various speculations about what may have happened in these peculiar GRBs, but as yet no tenable models. It could have been a type of supernova collapse which produced little or no light. Or a merger involving neutron stars that continued to produce gamma-rays for an extended time, perhaps in a system of more than two neutron stars and black holes (a more complex version of GRB 050724). Or perhaps something else entirely. Apart from something exotic, one suggestion is that the distance estimate for GRB 060614 is off, and there was an accompanying supernova, but it was too far away to be seen.

This mystery will probably spawn various hypothetical models in the next year or two, and may lead to a better understanding of supernovae.

References: here, here, here, here, here, here, here, here, here.

If you have a subscription to Science here's a pretty good summary of the situation: Burst-Hunter's Rich Data Harvest Yields a Cosmic Enigma


Exercise and cancer

We all know that exercise is a Good Thing – for cardiovascular health, for weight control, to reduce risks of diabetes and metabolic syndrome, and even to ward off Alzheimer's disease and dementia (see here, here, here, here, here).

Now there is recent research suggesting that exercise might also be beneficial in reducing the risk of various kinds of cancer:



Portrait of a Dramatic Stellar Crib

Portrait of a Dramatic Stellar Crib (12/21/06)
Known as the Tarantula Nebula for its spidery appearance, the 30 Doradus complex is a monstrous stellar factory. It is the largest emission nebula in the sky, and can be seen far down in the southern sky at a distance of about 170,000 light-years, in the southern constellation Dorado (The Swordfish or the Goldfish). It is part of one of the Milky Way's neighbouring galaxies, the Large Magellanic Cloud.

The Tarantula Nebula is thought to contain more than half a million times the mass of the Sun in gas and this vast, blazing labyrinth hosts some of the most massive stars known. The nebula owes its name to the arrangement of its brightest patches of nebulosity, that somewhat resemble the legs of a spider. They extend from a central 'body' where a cluster of hot stars (designated 'R136') illuminates and shapes the nebula. This name, of the biggest spiders on the Earth, is also very fitting in view of the gigantic proportions of the celestial nebula - it measures nearly 1,000 light-years across and extends over more than one third of a degree: almost, but not quite, the size of the full Moon. If it were in our own Galaxy, at the distance of another stellar nursery, the Orion Nebula (1,500 light-years away), it would cover one quarter of the sky and even be visible in daylight.




Tarantula Nebula (image of the 30 Doradus region)


Art, fractal and otherwise

Need a little visual diversion? How about some art with a scientific and mathematical angle – fractal art?

Painting by numbers
[N]ature abounds with examples of fractals: branching rivers and blood vessels, swirling cloud systems, the repeating patterns of mountain ranges and the rocks that comprise them.

People have long looked at these patterns and been fascinated, but it was not until the 1960s, when computers became sufficiently powerful, that mathematicians, scientists and engineers began to create and investigate fractals in their infinite detail.

It's been a fruitful endeavor. Fractal science allows researchers to perceive order in apparent disorder. Fractal concepts have been used to analyze the distribution of galaxies in the universe, the frequencies of economic cycle indices and the probabilities of earthquakes and wildfires.

If you're just interested in some relevant links, the article offers some for a couple of fractal artists: Kerry Mitchell and Janet Parke.

Art and science/mathematics get along very well, I think, as I sort of suggested not long ago.

Bathsheba Grossman's art mentioned in that recent post is not purely mechanical in the way (some) fractal art is, in that the latter may be (though it isn't always) generated purely by computer algorithms. Grossman's work, as with some fractal art, exhibits imaginative human intervention in a number of ways. And the same can be said of other forms of expression now regarded as "art" with little dispute, such as photography.

Even when the subject of a photograph is captured purely mechanically by a camera and reproduced mostly mechanically, the artist's creative intervention is still involved in various ways, such as choice of subject, waiting for the "decisive moment", cropping of an image in the camera or afterwards, lighting, and so forth. And that's before indisputably creative activity in manipulation of photographic images in the printing process or (more recently) by digital means.

But let's consider fractal art that consists purely of the execution of a computer algorithm. Is that still art? I think it can be, because the "creator" of such a work still chooses the algorithms, the initial inputs, and various parameters of the algorithm.
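To make that concrete, here is a minimal sketch – not any particular artist's software – of one of the standard algorithms, an escape-time rendering of a Julia set. The "creative" choices show up precisely as the choice of algorithm, the input constant c, the viewport, and how the iteration counts are later mapped to color:

```python
import numpy as np

def julia(c=-0.8 + 0.156j, size=400, iters=200, extent=1.6):
    """Escape-time rendering of the Julia set for z -> z**2 + c.
    The artistic choices live in the arguments: c, the viewport size
    and extent, and the iteration limit."""
    y, x = np.ogrid[-extent:extent:size * 1j, -extent:extent:size * 1j]
    z = x + 1j * y
    counts = np.zeros(z.shape, dtype=int)
    for i in range(iters):
        mask = np.abs(z) <= 2.0      # points that have not yet escaped
        z[mask] = z[mask] ** 2 + c
        counts[mask] = i             # remember the last iteration survived
    return counts

# The array of counts can then be fed to any colormap, for example:
#   import matplotlib.pyplot as plt
#   plt.imshow(julia(), cmap="twilight"); plt.axis("off"); plt.show()
```

Change c by a small amount, or pick a different colormap, and the picture changes dramatically – which is exactly the sense in which the human choices matter, even though the rendering itself is purely mechanical.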

Still, some people may question whether it's art. Now, I don't feel a need to change anyone's mind about that, since art is still ultimately something perceived in the mind of the beholder. If it doesn't work for you as "art", then it ain't art – for you.

Caution: beyond this point I'm just going to ramble a bit, without any pretense of being rigorous or scientific. If you have little patience for that sort of thing, you can cease and desist reading right here.

I recently witnessed an online discussion among intelligent people about the ancient question of "what is art?" Though I refrained from joining that discussion, some points were raised – points that probably always come up in this sort of discussion – which I wanted to reply to. So I'll do it now.

One point is the assertion that "art must communicate some message". But that assertion can lead to further questions. For instance, can we analyze such communication in an information theoretic way, à la Claude Shannon? And what is a "message" in the first place?

People with a scientific or engineering bent are particularly wont to attempt such an analysis, but I have my doubts about that approach. I would simply ask, what is the message, if any, that is communicated in a work by someone like Jackson Pollock? I don't know. Perhaps someone could contrive to find a message in a Pollock painting.

But that doesn't seem necessary to me. I can still enjoy a Pollock painting because it engages and stimulates my visual sensory apparatus, and it is "good" art because the pleasure of the stimulation it provides doesn't quickly become tiresome and lose its ability to engage. To this way of thinking, some of the better examples of fractal art (at least) also deserve to be called art, even if no specific "message" is communicated.

Another problem with the idea of "message" and "communication" being present in a proper work of art is the subject of much "postmodern" analysis of art, and in particular the "deconstruction" of a work of art. There's no way I can possibly do justice to this point of view in the space of a few paragraphs – people write long, convoluted books about such things. Nevertheless, my understanding of this idea, sketchy as it may be, is that postmodernism argues against inherent "meaning" in a cultural artifact, because most of the "meaning" actually depends on cultural context that the artifact implicitly references. And "deconstruction" of a work of art (or of other cultural artifacts like "messages" and "narratives") is the process of making explicit the cultural frames, assumptions, context, abstractions, metaphors, categories, etc., to which the artifact makes allusion, and without which the artifact cannot be understood or said to have any particular meaning at all.

A fine example, it seems to me, would be the Cycladic statues, such as have just been in the news on account of recent excavations, though many instances have long been known. The news article explains
The Cycladic culture — a network of small, sometimes fortified farming and fishing settlements that traded with mainland Greece, Crete and Asia Minor — is best known for the elegant figurines: mostly naked, elongated figures with arms folded under their chests. It flourished in 3200-2000 B.C.[E.], then was eclipsed by Crete and Mycenaean Greece.

But in spite of this antiquity, the figures seem like very modern abstract art and appeal to modern artistic sensibilities. Yet we know almost nothing about the culture in which the figures were created, and have very little idea of what they "meant" to people of that culture.
The figurines were made following a pattern that changed little over 800 years. They have been variously interpreted as depicting gods or venerated ancestors, serving as replacements for human sacrifice, grave goods — even children's toys.

Might we say, then, that art can be appreciated even if we don't know what a piece of art "means", even if it doesn't have an invariant, unambiguous meaning? And further, that what's important in art – whether it be fractal forms, Jackson Pollock paintings, Ansel Adams photos, or Cycladic statues – is its ability to capture our attention (at least for a while) and to "entertain" us and our senses?

Indeed, as I'm writing this, I'm also listening to some Beethoven piano sonatas. Music is certainly an example of a type of art which is primarily appreciated, without apology, as a form of entertainment, of sensual gratification. (Even though it is well recognized that there is such a thing as "program music", in contrast to "absolute music".) This is as true of Beethoven's music as it is of the music of the recently deceased James Brown (about whom and whose music I know essentially nothing).

Fine. Now having said all that, I'm going to reverse direction and consider the opposite point of view: art as message. In the first place, there's a lot of art (or what is asserted by some to be art) which is not entirely pleasing either to the senses or to the reflective mind. For example, novels of Dostoevsky or Kafka, or paintings by Pablo Picasso (Guernica) or Francisco Goya (The Third of May). Zillions of other examples could be cited, many of which might rather more likely be described as "disturbing" or "emetic", rather than as "entertaining" or "pleasing". About the only thing that could be described as pleasing about such works is the intellectual pleasure of grasping their message. That's certainly a valid kind of pleasure, but still...

A particular type of art in this category came up in the discussion mentioned previously. Or rather two related types: found art and performance art. An early example of found art is Marcel Duchamp's Fountain – a urinal. Examples of performance art can be found in the work of Karen Finley, involving (according to Wikipedia) "graphic depictions of sexuality, abuse, and disenfranchisement."

What's "entertaining" about this sort of thing (apart, perhaps, from erotic elements and fetishes)? Well, not necessarily anything, in any customary sense of "entertaining". The art here, if any, resides in its message. But many people assert that they can scarcely, if at all, see any real message. The example cited in discussion, if I recall correctly, was a load of trash dumped on the lawn before a civic building. "You call that art?" many people ask rhetorically.

Well, yes, as a matter of fact, provided one allows as art artifacts or performances which use at least some modicum of imagination to convey a message. What I would say is that in many cases people don't recognize the message because they don't like the message, though they in fact perceive it subliminally at some level. Further, a difficulty in perceiving the message results when the viewer does not share much of the artist's conceptual framework. In other words, as postmodernists point out, most or all of the meaning of an artifact or performance resides in its cultural context, categories, allusions, etc.

Just as one can't appreciate a novel written in a language one doesn't know, one can't (fully) appreciate a nonverbal artifact if one doesn't know all the concepts and associations that the artifact embodies for the artist. Communication can't effectively occur unless there is a sufficient amount of shared conceptual space. (Try explaining diffeomorphisms to someone who doesn't even know calculus.) And communication must occur for art of the "message" sort to be worthwhile – as opposed to art of the "sensory" sort which appeals directly to the human perceptual apparatus in one or more modalities.

Regarding message art, I've coined an aphorism which, as far as I know, is original. "Art is how we try to explain us to ourselves." Here, "us" could be humans in general, or a specific cultural group. Obviously, this applies mostly to art as a form of communication. When people disagree with this, I take it that they are thinking of more sensory kinds of art. Or perhaps, simply recognizing that message art doesn't always articulate answers and explanations – sometimes only nagging questions. (Note to self: some other time go into the etymology of words like articulate, artifact, artifice, artificial, etc.)

Time to wrap up. People disagree about what is or is not art because there are actually two rather different things that people can talk about in the category of art. These two things are analogous to what in musicology is called program music vs. what is called absolute music.

This isn't an especially deep or profound observation. In terms of neurobiology, all we're talking about is stuff that goes on in the frontal cortex and other regions that support cognitive functions vs. the stuff that goes on in perceptual regions (e. g. parietal and occipital lobes) and supporting regions that mediate emotions, like the amygdalae.

After all, this is a science blog, so you knew the discussion had to come down to physical realities eventually, didn't you? Nobody here but us reductionists, boss.

Note: This might be a topic to get some good comments on. So if anything I've said here touches a nerve, feel free to comment away.


Philosophia Naturalis #5 has been published

Chris Rowan at Highly Allochthonous has posted the 5th edition of Philosophia Naturalis. It's in the form of a great essay about the functions of science blogging. Don't miss it!

Virtual reality to get its own network?

This could be very interesting if it's not, as some suggest, a scam:

Virtual reality to get its own network?
A nonprofit group says it plans to build a network called Neuronet purely to support virtual-reality game and business applications.

Neuronet, which is planned to be separate from the Internet, "will evolve into the world's first public network capable of meeting the data transmission requirements of emerging cinematic and immersive virtual-reality technologies," according to a Thursday announcement from the Vancouver-based International Association of Virtual Reality Technologies.

For more, see the home page of the group that's promoting this: International Association of Virtual Reality Technologies

For the skeptical appraisal, see Group promises dedicated VR "Neuronet," skepticism ensues and Is Neuronet A Scam?

If this thing isn't a real project, it should be. If you look at the success of Second Life, you can perhaps imagine where this could go with high-quality video data and user-side equipment to create a "virtual reality" experience.

This particular project may not be for real. But just wait 10 years or so. The applications won't be just game playing. This is the future of business teleconferencing (big bucks there), and eventually virtual gatherings of families and friends.


Top five nanotech breakthroughs of 2006

Here's an interesting top-something list, from Forbes – nanotechnology:

Top Five Nanotech Breakthroughs Of 2006
This year saw a slew of remarkable nanotech breakthroughs, and narrowing down the top five was no easy task. One major theme of 2006 was the intersection of computing and biology--integrated circuits were used to study everything from neural activity to tissue dynamics, and disposable bio labs-on-a-chip became a reality.

As usual, one can take issue with some of the citations or suggest others. But what's especially interesting here is that in each item, there are actually multiple instances of progress in the same general area. Allow me to illustrate this with several examples.

1) DNA ORIGAMI

There are reports on the work in question here, here, here, and here. This work involves constructing nanoscale objects out of DNA molecules. There is, in fact, a whole subfield of nanotechnology centered around the use of DNA. It's called DNA nanotechnology (unsurprisingly). Prof. Ned Seeman of NYU has been a leader in this field. Some of his references are here (with some nice graphics), here, here, and here. Seeman's laboratory most recently reported a "nanorobotic arm" using DNA – see here, here, and here.

And here's some additional news this year related to DNA nanotechnology:


2) NANOMAGNETS TO CLEAN UP DRINKING WATER

This research obviously has immense real-world importance. But it isn't so much an example of a major area of nanotech activity. Anyhow, here's an overview of the work: Cleaning Up Water with Nanomagnets. The original work was published in Science (November 10, 2006): Low-Field Magnetic Separation of Monodisperse Fe3O4 Nanocrystals.

3) ARRAYS CONNECT NANOWIRE TRANSISTORS WITH NEURONS

Nanowires of various kinds have been big news this year. For examples, see here, here, here, here, here, here, and here.

Similarly, there have been a number of results with interfacing electronics and neurons, for such things as controlling prosthetic limbs and playing computer games. One of the more interesting examples is the recent report of a small robot controlled through a neuro-electronic interface. Carbon nanotubes have also been used for neuro-electronic interfaces.

But the work mentioned in the Forbes article, where silicon nanowires only 20 nanometers wide can detect signals at as many as 50 places on a single neuron, is certainly impressive. See here, here, or here for details.

Other research into interfacing neurons and carbon nanotubes: here.

4) SINGLE NANOTUBE ELECTRICAL CIRCUITS

Research involving carbon nanotubes is probably the most active area in the whole field of nanotechnology. The examples are far too numerous to mention individually.

Reports on the research referred to in the Forbes article can be found here, here, here, here, here, here, here, here, here, and here.

Other uses of carbon nanotubes in electronics are reported here, here, and here.

5) NANOPARTICLES DESTROY PROSTATE CANCER

There's a general problem with the use of advanced drugs as therapeutics, especially in cancer treatment and gene therapy – delivering the drugs as specifically as possible to the organs or tissues where the drug should be active, while avoiding tissues where the drug could do unnecessary harm. Chemotherapy is perhaps the principal example of this problem. It is possible to design nanoparticles which gain entry only to certain types of cells, so encasing a drug inside such a particle may solve the problem.

Research involving chemotherapy for prostate cancer was reported in April of this year, and is a noteworthy example of this approach. Reports about the research can be found here, here, here, here, here, here, and here. An especially long and informative article about MIT cancer research, including the nanoparticle work, is here. Here's a more general overview: Tumor-Seeking Nanoparticles.

Nanoparticles can also be used to deliver imaging or contrast agents to cancer cells in order to make them easier to detect. There have been a number of other research results reported this year involving nanoparticles for drug delivery or imaging. A few recent examples, just since October:


-------------------

For another review of important nanotechnology results this year, with many links, take a look at: The Year in Nanotech – Dazzling displays, handheld sensors, cancer killers, and nanotube computers.

-------------------


Clues to the origins of life

The question of how life originated on Earth is one of the really big open questions for science. Right up there with questions like how the universe itself started and how the human mind works.

Questions about how life began have been asked for a long time, of course. But only within roughly the last 50 years, since DNA and related biochemistry began to be understood, has it been possible to address such questions scientifically.

DNA, and its very close relative RNA, provide the framework for one essential of life: the storage of information, which allows for "blueprints" that describe a living organism to be conveniently encoded, so that individual organisms can be duplicated and, ultimately, evolve into more complex organisms. We now understand pretty well how DNA and RNA work, so one key question now is – how did DNA and RNA, the carriers of genetic information, come about?

DNA and RNA are made up of relatively simple organic molecules – sugars and phosphate groups that can polymerize to form a backbone, and a small number of bases which encode information by the way they are ordered in their attachment to the backbone. The information encoded in DNA details how to make proteins, which are also polymeric organic molecules, consisting of amino acids attached to each other in a sequence specified (mostly) by the DNA. It is the proteins that make up the bulk of the cellular machinery that constitutes a stand-alone single-celled organism, or by grouping together makes a multi-celled organism. So a large part of the question of life's origins comes down to that of how these various organic chemicals came to exist.

In addition to the organic chemicals that make up an organism, another necessity of life is the ability to utilize energy that is ultimately obtained from the environment. In most cases, this energy is derived from sunlight, although in a few rare cases it can come from radioactive elements. Either way, an organism needs to tap into the environmental energy in order to drive chemical reactions which power cellular mechanisms that enable reproduction, locomotion, and (in multicellular organisms) growth. (More complex organisms can also derive their energy from "food", in the form of simpler organisms that have stored up environmental energy obtained more directly.) So another key question is: when and how did these energy-management processes come about?

There have been recent research findings that are relevant to various of these questions.

Let's consider the origins of organic compounds first. One line of thinking is that organic compounds were primarily synthesized from inorganic compounds in natural processes here on Earth. The names Aleksandr Oparin and J. B. S. Haldane are associated with this idea. The classic experiment testing it is the Miller-Urey experiment, after Stanley Miller and Harold Urey, first conducted in 1953 – the same year that the structure of DNA was identified by Francis Crick and James Watson. The experiment showed that amino acids and other simple organic compounds can form under conditions thought to resemble those of the early Earth, but whether that is actually how life's raw materials arose remains conjectural.

An alternative scenario for the origins of organic compounds is that some simple ones formed in space, which is known to happen, and that some of the basic building blocks of life, such as amino acids, were introduced to Earth on meteorites. This possibility has gained more plausibility from the recently announced finding of apparent "organic materials" in a meteorite that fell in 2000.

NASA Scientists Find Primordial Organic Matter In Meteorite
In a paper published in the Dec. 1 issue of the journal Science, the team, headed by NASA space scientist Keiko Nakamura-Messenger, reports that the Tagish Lake meteorite contains numerous submicrometer hollow organic globules.

Because the meteorite immediately became frozen in ice after it landed, the possibility of contamination from terrestrial material was minimized. Further, the isotopic composition of hydrogen and nitrogen in the globules is quite unlike what is normally found on Earth. It also appears that the material in the meteorite formed at least 4.5 billion years ago – before the Earth and the other planets themselves.
"The isotopic ratios in these globules show that they formed at temperatures of about -260° C, near absolute zero," said Scott Messenger, NASA space scientist and co-author of the paper. "The organic globules most likely originated in the cold molecular cloud that gave birth to our Solar System, or at the outermost reaches of the early Solar System."

Additional references:

Just about two weeks later, results from a completely different source appeared that also showed the existence of organic compounds in primordial solar system material. This was from the Stardust mission to retrieve grains of matter from the comet 81P/Wild-2:

Comets hold life chemistry clues
Scientists studying the tiny grains of material recovered from Comet Wild-2 by Nasa's Stardust mission have found large, complex carbon-rich molecules.

They are of the type that could have been important precursor components of the initial reactions that gave rise to the planet's biochemistry.

Unlike the case with the Tagish Lake meteorite, it was possible to identify many of the organic compounds in the returned material:
These Wild-2 compounds lack the aromaticity, or carbon ring structures, frequently found in meteorite organics. They are very rich in oxygen and nitrogen, and they probably pre-date the existence of our Solar System.

"It's quite possible that what we're seeing is an organic population of molecules that were made when ices in the dense cloud from which our Solar System formed were irradiated by ultraviolet photons and cosmic rays," Dr Sandford explained.

"That's of interest because we know that in laboratory simulations where we irradiate ice analogues of types we know are out there, these same experiments produce a lot of organic compounds, including amino acids and a class of compounds called amphiphiles which if you put them in water will spontaneously form a membrane so that they make little cellular-like structures."

Additional information from the special Stardust issue of Science (December 15, 2006 – sub. rqd. for full access):

Although these results indicate that organic material formed in or before the earliest stages of the solar system might have seeded organic chemistry on Earth, there is as yet no evidence that this actually is how it happened. An even more radical possibility is that actual living carbon-based organisms that originated outside of our solar system "transplanted" life to Earth. This idea is known as panspermia, but so far there's little or no credible evidence for it. Short of that, the most we can say is that the organic compounds needed for life either originated on Earth, arrived from outside, or came from some combination of the two sources.

So let's move on and turn to the question of how the earliest organisms managed energy supplies in order to reproduce and move. Every organism on Earth that produces energy from the chemical processing of carbohydrates, fats, and proteins uses a complex series of reactions known as the citric acid cycle (also known as the Krebs cycle). (There are other energy-producing processes, of course, such as photosynthesis.) The question to be answered is how this complex series of reactions first arose:

New Insights Into The Origin Of Life On Earth

In an advance toward understanding the origin of life on Earth, scientists have shown that parts of the Krebs cycle can run in reverse, producing biomolecules that could jump-start life with only sunlight and a mineral present in the primordial oceans.

The Krebs cycle is a series of chemical reactions of central importance in cells -- part of a metabolic pathway that changes carbohydrates, fats and proteins into carbon dioxide and water to generate energy.
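
For reference – this is a standard textbook summary, not something stated in the article – the net effect of one turn of the cycle, per molecule of acetyl-CoA fed in, can be written as:

$$\mathrm{acetyl\text{-}CoA} + 3\,\mathrm{NAD^+} + \mathrm{FAD} + \mathrm{GDP} + \mathrm{P_i} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{CO_2} + 3\,\mathrm{NADH} + 3\,\mathrm{H^+} + \mathrm{FADH_2} + \mathrm{GTP} + \mathrm{CoA\text{-}SH}$$

The NADH and FADH2 produced here are what the cell later cashes in, via the electron transport chain, for usable energy in the form of ATP.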

Since the cycle can run backwards, it is possible to identify an inorganic compound that may have kickstarted the process:

Nature's Jump-Starter
Reporting in next week's Journal of the American Chemical Society, researchers at Harvard University say they may have found at least one of the original players. Called sphalerite, the compound is a mix of zinc and sulfur ejected from hydrothermal vents and known to have been plentiful in Earth's early seas. Geochemist and co-author Scot Martin says the team's new lab experiments show that when immersed in sterile water and exposed to sunlight, sphalerite can create three of the five basic organic chemicals necessary to start the Krebs cycle in relatively quick fashion. Further research is needed to isolate the other compound or compounds that could have produced the remaining two Krebs ingredients, he notes. If scientists can find their sources, then they will know that the five chemical foundations of the Krebs cycle were being manufactured easily and routinely in Earth's early oceans.

In addition to relatively simple organic chemical building blocks and chemical reactions that can release energy to make an organism that is "alive", there is a third prerequisite for life: some method of storing information about an organism's composition and structure so that the organism can replicate itself, instead of simply disappearing after each generation. In other words, genetic material.

Today, that genetic material consists of DNA and RNA, which in turn are made up of a handful of bases that act as symbols encoding the genetic message and are arranged along a linear backbone of simple sugar and phosphate groups. But are these the only possible chemical entities that can perform this kind of function?

In the past, other possibilities have been suggested, such as peptide nucleic acids (PNAs). A PNA has a backbone formed of simple molecules consisting of carbon, nitrogen, hydrogen, and oxygen. These are linked together by peptide bonds, which form when an H and an OH group from two molecules combine to form H2O, leaving the original molecules joined to each other. Such peptide bonds also form the backbone of proteins. But unlike proteins, PNAs have DNA-like bases attached to the backbone instead of amino acid side chains. However, PNAs do not occur naturally, so they do not seem to have played a role in life on Earth.
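
As a refresher – standard chemistry, not something from the research discussed here – the condensation reaction that forms a peptide bond between a carboxyl group and an amine group, with loss of water, looks like this:

$$\mathrm{R_1{-}COOH} + \mathrm{H_2N{-}R_2} \;\longrightarrow\; \mathrm{R_1{-}CO{-}NH{-}R_2} + \mathrm{H_2O}$$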

If there are other ways of structuring a backbone, perhaps comparing them to what is actually used in RNA (the sugar known as ribose) and DNA (the sugar deoxyribose) would suggest why those particular sugars won out. That was the idea behind this research:

Uncovering DNA's 'Sweet' Secret
“These molecules are the result of evolution,” said Egli, professor of Biochemistry. “Somehow they have been shaped and optimized for a particular purpose.”

“For a chemist, it makes sense to analyze the origin of these molecules.”

One particular curiosity: how did DNA and RNA come to incorporate five-carbon sugars into their “backbone” when six-carbon sugars, like glucose, may have been more common? Egli has been searching for the answer to that question for the past 13 years.

Recently, Egli and colleagues solved a structure that divulges DNA's “sweet” secret. In a recent issue of the Journal of the American Chemical Society, Egli and colleagues report the X-ray crystal structure of homo-DNA, an artificial analog of DNA in which the usual five-carbon sugar has been replaced with a six-carbon sugar.

It was found that homo-DNA is more stable than DNA/RNA and that it allows a wider variety of bases to be attached. So why didn't it prevail?
[D]espite homo-DNA's apparent versatility in base pairing and its thermodynamic stability, other features of the molecule's architecture probably preclude it from being a viable genetic system

For example, it cannot pair with other nucleic acids — unlike DNA and RNA which can and must pair with each other. Also the steep angle, or inclination, between the sugar backbone and the bases of homo-DNA requires that the pairing strands align strictly in an antiparallel fashion — unlike DNA which can adopt a parallel orientation. Finally, the irregular spaces between the “rungs” prevent homo-DNA from taking on the uniform structure DNA uses to store genetic information.

The findings suggest that fully hydroxylated six-carbon sugars probably would not have produced a stable base-pairing system capable of carrying genetic information as efficiently as DNA.

So that variation didn't work out. But what about the possibility of using a different set of bases than the purines and pyrimidines which actually occur? That was investigated in this study:

Origin Of Life: The Search For The First Genetic Material
To find the right track in searching for the origins of life, the team is trying to put together groups of potential building blocks from which primitive molecular information transmitters could have been made. The researchers have taken a pragmatic approach to their experiments. Compounds that they test do not need to fulfill specific chemical criteria; instead, they must pass their “genetic information” on to subsequent generations just as simply as the genetic molecules we know today—and their formation must have been possible under prebiotic conditions. Experiments with molecules related to the usual pyrimidine bases (pyrimidine is a six-membered aromatic ring containing four carbon and two nitrogen atoms), among others, seemed a good place to start. The team thus tried compounds with a triazine core (a six-membered aromatic ring made of three carbon and three nitrogen atoms) or aminopyridine core (which has an additional nitrogen- and hydrogen-containing side group). Imitating the structures of the normal bases, the researchers equipped these with different arrangements of nitrogen- and hydrogen- and/or oxygen-containing side groups.

Unlike the usual bases, these components can easily be attached to many different types of backbone, for example, a backbone made of dipeptides or other peptide-like molecules. In this way, the researchers did indeed obtain molecules that could form specific base pairs not only with each other, but also with complementary RNA and DNA strands. Interestingly, only one sufficiently strong pair was formed within both the triazine and aminopyridine families; however, for a four-letter system analogous to the ACGT code, two such strongly binding pairs are necessary.

The conclusion was that the critical factor affecting the composition of modern genetic material was the structure of the bases rather than the structure of the backbone. What is essential is having bases capable of pairing up in specific ways, as occurs in double-stranded DNA and in DNA-RNA combinations.
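
To illustrate what "pairing up in specific ways" means in practice, here is a small sketch (mine, not from the study) that checks whether two DNA strands are complementary under the standard Watson-Crick pairing rules. Any candidate genetic alphabet, such as the triazine or aminopyridine systems described above, would need an analogous set of strong, mutually exclusive pairs.

# Watson-Crick pairing rules for the standard four-letter DNA alphabet.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary(strand1, strand2):
    """Return True if the two strands can pair base-for-base,
    with strand2 read in the reverse (antiparallel) direction."""
    if len(strand1) != len(strand2):
        return False
    return all(PAIRS.get(a) == b for a, b in zip(strand1, reversed(strand2)))

print(complementary("ATGC", "GCAT"))  # True: A-T, T-A, G-C, C-G
print(complementary("ATGC", "ATGC"))  # False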

Tags: , , , , ,
Read More >>