Welcome to Sott.net
Tue, 23 Apr 2019
The World for People who Think

Science & Technology


Asteroid 6478 Gault now has two tails and scientists think it's breaking apart


Thanks to an impressive collaboration bringing together data from ground-based telescopes, all-sky surveys and space-based facilities — including the NASA/ESA Hubble Space Telescope — a rare self-destructing asteroid called 6478 Gault has been observed.
In space, no one can hear an asteroid scream. But astronomers just used the Hubble telescope to see one destroying itself.

A 2.5-mile-wide asteroid called 6478 Gault was first discovered in 1988, and it seemed like many of the other 800,000 known space rocks.

But in January, astronomers saw something strange in survey telescope images: Gault had become "active" and sprouted a big, bright tail - much like a comet's - that stretched more than 500,000 miles long. A dimmer second tail was found several weeks later.

Some space rocks that initially look like asteroids are later found to be comets when they pass close to the sun. The boost in solar energy can warm up ice and other frozen compounds hidden under layers of dust, turning those materials into gases and leading the rock to spew out comet debris to form a long, glowing tail.

Comment: As noted in Something only EU can explain: Asteroid 6478 Gault 'suddenly sprouts a comet-like tail':
It IS a 'comet' because the only difference between an asteroid and a comet is that the latter is glowing from electrical discharge.

Gault didn't seem to fit the bill, though, since it lurks about 214 million miles away from the sun in a fairly circular orbit between Mars and Jupiter. In other words, it never swung close to the sun. So scientists wondered if another space rock had collided with Gault, splashing its dusty guts all over space.

Comment: Scientists are documenting a lot of 'rare' activity which they are struggling to explain because, despite advances in the technology that can monitor space rocks, their current theories have yet to incorporate the electromagnetic nature of space. Also check out SOTT radio's:


'Technoference' - The disruptive impact of mobile phones

Smart Phone
© d3sign/Getty Images
Convenient as they may be, for an increasing number of people mobile phones are also disruptive.
New Australian research highlights the increasing risk of "technoference" - the disruptive impact of mobile phones.

Nearly a quarter of women surveyed and 15% of men could be classified as problematic mobile users, the researchers say. That jumps to 40.9% for the 18-to-24 age group.

And it's a rapidly escalating problem.

The researchers, led by Oscar Oviedo-Trespalacios from the Centre for Accident Research and Road Safety at Queensland University of Technology, surveyed 709 people aged 18 to 83 in 2018, using questions replicated from a similar survey in 2005.

They then compared the findings and discovered significant increases in people blaming their phones for everything from losing sleep to becoming less productive or taking more risks while driving.

Today 19.5% of women and 11.8% of men say they lose sleep due to the time they spend on their mobile phone, compared with just 2.3% and 3.2% respectively in 2005.

One in eight men say their productivity has decreased as a direct result of the time they spend on their mobile - compared to none in 2005.
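The scale of the shift is easy to check from the percentages quoted above. A quick arithmetic sketch (using only the figures reported in the article, not the underlying survey data):

```python
# Self-reported sleep loss attributed to mobile phone use,
# percentages as quoted in the article (2005 vs 2018 surveys).
women_2005, women_2018 = 2.3, 19.5
men_2005, men_2018 = 3.2, 11.8

print(round(women_2018 / women_2005, 1))  # 8.5: roughly an 8.5-fold rise for women
print(round(men_2018 / men_2005, 1))      # 3.7: roughly a 3.7-fold rise for men
```

Even allowing for differences between the two survey samples, both ratios point the same way: a several-fold increase in just over a decade.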


These giant viruses have weaponised CRISPR against their rivals and bacterial hosts

Bacteriophage viruses
Bacteriophage viruses attacking an E. coli cell
Hundreds of huge, bacteria-killing viruses have been newly discovered lurking in all kinds of environments, including our guts. Their massive genomes code for many proteins not found in smaller viruses, including CRISPR systems used to attack both their bacterial hosts and rival viruses.

The massive viruses have long gone unnoticed because the standard methods used to look for bacteria-killing viruses, or bacteriophages, literally filter them out. Instead, a team has found them by looking at all the DNA present in a variety of samples, an approach known as metagenomics.

The researchers then pieced together the genomes of the huge phages using a method developed by team leader Jill Banfield at the University of California, Berkeley.

Bacteriophages are the most common entities on Earth. There can be many millions in a drop of seawater. Almost all known phages have genomes tens of thousands of DNA letters long at most. Larger ones were thought to be very rare. But earlier this year, Banfield's team reported finding more than a dozen phages with genomes up to 540,000 DNA letters long in the guts of humans and animals.

Comment: See also:


Information Realism: Why physics is pointing inexorably to mind

mind physics
© Getty Images
In his 2014 book, Our Mathematical Universe, physicist Max Tegmark boldly claims that "protons, atoms, molecules, cells and stars" are all redundant "baggage." Only the mathematical apparatus used to describe the behavior of matter is supposedly real, not matter itself. For Tegmark, the universe is a "set of abstract entities with relations between them," which "can be described in a baggage-independent way" - i.e., without matter. He attributes existence solely to descriptions, while incongruously denying the very thing that is described in the first place. Matter is done away with and only information itself is taken to be ultimately real.

Comment: Tegmark is right about the importance of information, but his formulation is the epitome of what is wrong with scientific abstraction: the fallacy of misplaced concreteness. As R.G. Collingwood wrote in Speculum Mentis:
For it must be borne in mind that the abstract concept is nothing but the abstract structure of the sensible world, and therefore if the concept alone is real the world whose structure it is will be mere appearance and not reality, and therefore the concept will be a class whose members are not real.

... Mathematics is nothing but the assertion of the abstract concept, and it can give us no account of the presuppositions of this assertion. Mathematical logic is only the shadow of science itself. It is the truth, but the truth about nothing: it is the description of the structure of a null class. Hence, though the hypotheses of empirical science must have some kind of categorical basis, they cannot find this in mathematics, which is the very distilled essence of hypothesis itself. The abstract cannot rest upon the more abstract, but only on the concrete.

This abstract notion, called information realism, is philosophical in character, but it has been associated with physics from its very inception. Most famously, information realism is a popular philosophical underpinning for digital physics. The motivation for this association is not hard to fathom.

Indeed, according to the Greek atomists, if we kept on dividing things into ever-smaller bits, at the end there would remain solid, indivisible particles called atoms, imagined to be so concrete as to have even particular shapes. Yet, as our understanding of physics progressed, we've realized that atoms themselves can be further divided into smaller bits, and those into yet smaller ones, and so on, until what is left lacks shape and solidity altogether. At the bottom of the chain of physical reduction there are only elusive, phantasmal entities we label as "energy" and "fields" - abstract conceptual tools for describing nature, which themselves seem to lack any real, concrete essence.

Comment: Only if we commit ourselves to believing solely in our own abstractions.


Michael Behe responds to Prof. Lenski: Most random mutations ARE damaging

© Zachary Blount [CC BY-SA 4.0], from Wikimedia Commons
Richard Lenski
This is the third in a series of posts responding to the extended critique of Darwin Devolves by Richard Lenski at his blog, Telliamed Revisited. Professor Lenski is perhaps the most qualified scientist in the world to analyze the arguments of my book. He is the Hannah Distinguished Professor of Microbial Ecology at Michigan State University, a MacArthur ("Genius Award") Fellow, and a member of the National Academy of Sciences with hundreds of publications. He also has a strong interest in the history and philosophy of science. His own laboratory evolution work is a central focus of the book. I am very grateful to Professor Lenski for taking time to assess Darwin Devolves. His comments will allow interested readers to quickly gauge the relative strength of arguments against the book's thesis.

Unintended Consequences

I have already addressed several of the issues that Lenski has raised at his blog in his third post on Darwin Devolves, "Is the LTEE Breaking Bad?," in my responses (here, here, and here) to the review by my Lehigh colleagues, because they cited his work frequently. Nonetheless, repetition is a fine teaching tool. So here I will again speak to those issues and also address a few others.

In "Is the LTEE Breaking Bad?" Lenski agrees that the beneficial mutations seen in his Long Term Evolution Experiment are overwhelmingly degradative or loss-of-function ones. Even so, that does not concern him because "the LTEE represents an ideal system in which to observe degradative evolution." Beneficial degradative changes are only to be expected there, it seems.
The LTEE was designed (intelligently, in my opinion!) to be extremely simple in order to address some basic questions about the dynamics and repeatability of evolution, while minimizing complications. It was not intended to mimic the complexities of nature, nor was it meant to be a test-bed for the evolution of new functions. The environment in which the bacteria grow is extremely simple. ...

Indeed, the LTEE environment is so extremely simple that one might reasonably expect the bacteria would evolve by breaking many existing functions. That is because the cells could, without consequence, lose their abilities to exploit resources not present in the flasks, lose their defenses against absent predators and competitors, and lose their capacities to withstand no-longer-relevant extreme temperatures, bile salts, antibiotics, and more. [Emphasis in the original.]
In other words, there are many tools in the robust E. coli genomic toolbox that wouldn't be needed in the Michigan State lab. It could lose them without immediate consequence. In fact, there may even be some benefit to losing them, either by simply saving the energy of making them, or by diverting resources to other pathways that are more heavily used in the lab environment.

Comment: Behe's previous responses to Lenski:


Designer organelles bring new functionalities into cells

organelle city
© Gemma Estrada Girona
The authors see the newly developed synthetic host as a city. On the one hand, typical cellular processes -- seen as encapsulated, isolated, and made up of non-interchangeable elements -- are represented as repetitive structures: squared, isolated blocks which are always fenced, just like membranous organelles. On the other hand, the image highlights the making of a new organelle -- a new building that is not fenced -- which is accessible to the rest of the city while having its own identity, a building which is more dynamic and flexible.
For the first time, scientists have engineered the complex biological process of translation into a designer organelle in a living mammalian cell. Research by the Lemke group at the European Molecular Biology Laboratory (EMBL) -- in collaboration with JGU Mainz and IMB Mainz -- used this technique to create a membraneless organelle that can build proteins from natural and synthetic amino acids carrying new functionality. Their results -- published in Science on 29 March -- allow scientists to study, tailor, and control cellular function in more detail.

During evolution, the development of new organelles allows cells and organisms to become more complex, due to the ability to sort cellular processes into specific hotspots. "Our tool can be used to engineer translation, but potentially also other cellular processes like transcription and post-translational modifications. This might even allow us to engineer new types of organelles that extend the functional repertoire of natural complex living systems," explains Christopher Reinkemeier, PhD student at EMBL and JGU Mainz and co-first author of the paper. "We could for example incorporate fluorescent building blocks that allow a glimpse inside the cell using imaging methods."

"The organelle can make proteins by using synthetic non-canonical amino acids. Currently we know of more than 300 different non-canonical amino acids -- compared to 20 which are naturally occurring. We are no longer restricted to the latter ones," says co-first author Gemma Estrada Girona. "The novelty we introduce is the ability to use these in a confined space, the organelle, which minimises the effects on the host."

Comment: Imagine the amount of effort and trouble that went into designing such a component, yet a Darwinist approach expects us to believe this happened by random processes. See also:


Quantum machine appears to defy the push to disorder - remembers its ordered state

quantum lab
© Mikhail Lukin
A view of the lab where researchers built the 51-qubit quantum simulator.
Given enough time, even a tidy room will get messy. Clothes, books and papers will leave their ordered state and scatter across the floor. Annoyingly, this tendency toward untidiness reflects a law of nature: Disorder tends to grow.

If, for example, you cut open a pressurized scuba tank, the air molecules inside will spew out and spread throughout the room. Place an ice cube in hot water and the water molecules frozen in the ordered, crystalline lattice will break their bonds and disperse. In mixing and spreading, a system strives toward equilibrium with its environment, a process called thermalization.

It's common and intuitive, and precisely what a team of physicists expected to see when they lined up 51 rubidium atoms in a row, holding them in place with lasers. The atoms started in an orderly pattern, alternating between the lowest-energy "ground" state and an excited energy state. The researchers assumed the system would quickly thermalize: The pattern of ground and excited states would settle almost immediately into a jumbled sequence.

And at first, the pattern did jumble. But then, shockingly, it reverted to the original alternating sequence. After some more mixing, it returned yet again to that initial configuration. Back and forth it went, oscillating a few times in under a microsecond - long after it should have thermalized.

It was as if you dropped an ice cube in hot water and it didn't just melt away, said Mikhail Lukin, a physicist at Harvard University and a leader of the group. "What you see is the ice melts and crystallizes, melts and crystallizes," he said. "It's something really unusual."

Physicists have dubbed this bizarre behavior "quantum many-body scarring." As if scarred, the atoms seem to bear an imprint of the past that draws them back to their original configuration over and over again.
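The revival behavior has a simple textbook analogue: a quantum state whose weight is concentrated on eigenstates with equally spaced energies returns to itself exactly with period 2π/ω. This toy calculation (an illustration of the revival mechanism, not a model of the 51-atom Rydberg system) shows such a revival:

```python
import numpy as np

# Toy revival: a state spread over eigenstates with equally spaced
# energies E_n = n*w revives with period T = 2*pi/w.
w = 1.0
energies = w * np.arange(4)        # equally spaced spectrum (the toy assumption)
amps = np.full(4, 0.5)             # equal overlaps; |amps|^2 sums to 1

def fidelity(t):
    """Return |<psi(0)|psi(t)>|^2 under phase evolution exp(-i*E_n*t)."""
    phases = np.exp(-1j * energies * t)
    return abs(np.sum(np.abs(amps) ** 2 * phases)) ** 2

T = 2 * np.pi / w
print(fidelity(0.0))    # 1: the ordered starting state
print(fidelity(T / 2))  # ~0: the state has dephased ("jumbled")
print(fidelity(T))      # ~1: full revival of the initial pattern
```

Scarred states behave roughly as if a small part of the spectrum were nearly equally spaced in this way, which is why the initial pattern keeps coming back before thermalization finally wins.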

Comment: That sounds a lot like memory. But how can atoms have memory? How are they able to bear such an 'imprint'? As the scientists quoted below make clear, no one knows. Perhaps it has to do with something most scientists don't let into their theorizing. Maybe the mysterious scarring is the physical trace of a fundamental form of memory in the most basic kinds of matter? But in order to think that, you would have to consider panpsychism as a real possibility.


Bio-engineered blood vessels that are self-sustaining

Bio-engineered blood vessel
© Kirkton et al
A bio-engineered blood vessel (left) and the different cell structures that soon come to inhabit it.
Researchers have bio-engineered blood vessels that when implanted into a patient are quickly colonised by native cells and become self-sustaining.

A team led by Robert Kirkton of US-based regenerative medical tech company Humacyte Inc created biodegradable scaffolds in the form of blood vessels and then seeded them with human vascular cells before incubating them in a bioreactor for eight weeks.

After the incubation, Kirkton and his colleagues removed all the cellular material, leaving behind what they term human acellular vessels (HAVs).

In a four-year phase II clinical trial, the HAVs were implanted into 60 patients with end-stage kidney failure, where they served as entry ports for hemodialysis treatments - an approach which requires access to healthy blood vessels.


Michael Behe responds to his Lehigh colleagues: Molecular machines really are machines

© Joseph Giansante ’76 / Wikimedia Commons
Lehigh University campus
Recently two of my Lehigh University Department of Biological Sciences colleagues published a seven-page critical review of Darwin Devolves in the journal Evolution. As I'll show below, it pretty much completely misses the mark. Nonetheless, it is a good illustration of how sincere-yet-perplexed professional evolutionary biologists view the data, as well as how they see opposition to their views, and so it is a possible opening to mutual understanding. This is the third of a three-part reply. It continues directly from Part 2. See here for Part 1.

Of Course Proteins Are Machines

A basic difference between the views of Greg Lang and Amber Rice and my own concerns the nature of the molecular foundation of life. They object that I consider many biochemical systems to be actual machines. They quote a line from Darwin Devolves stating that protein systems are "literal machines - molecular trucks, pumps, scanners, and more." They write disapprovingly that the book claims "rod cells are fiber optic cables ... The planthopper's hind legs are a 'large, in your face, interacting gear system.'" They do concede that I didn't make up those claims about the machine-like nature of the systems out of whole cloth: "Most of the analogies in Darwin Devolves are not Behe's creation - he has done well to scour press coverage and the scientific literature for relatable metaphors; and he is generous with their use." Nonetheless, they say, "reality remains: proteins are not machines, a flagellum is not an outboard motor."

On this point they are simply wrong. "Molecular machine" is no metaphor; it is an accurate description. Unless Lang and Rice are arguing obliquely for some sort of vitalism - where the matter of life is somehow different from nonliving matter - then of course proteins and systems such as the bacterial flagellum are machinery. What else could they be? Although they aren't made of metal or plastic like our everyday tools, protein systems consist of atoms of carbon, oxygen, nitrogen, and so on - the same kinds of atoms as are found in inorganic matter, nothing special.

Comment: See the previous two parts here:


Michael Behe responds to his Lehigh colleagues on the true likelihood of degradative mutation

© IR393DEME / Wikimedia Commons
Lehigh University campus
Recently two of my Lehigh University Department of Biological Sciences colleagues published a seven-page critical review of Darwin Devolves in the journal Evolution. As I'll show below, it pretty much completely misses the mark. Nonetheless, it is a good illustration of how sincere-yet-perplexed professional evolutionary biologists view the data, as well as how they see opposition to their views, and so it is a possible opening to mutual understanding. This is the second of a three-part reply. It continues directly from Part 1.

A Limited Accounting of Degradation

Greg Lang and Amber Rice cite a number of articles to show that loss-of-function mutations are just a small minority of those found in studies of organisms.
However, the truth is that loss of function mutations account for only a small fraction of natural genetic variation. In humans only ∼3.5% of exonic and splice site variants (57,137 out of 1,639,223) are putatively loss of function, and a survey of 42 yeast strains found that only 242 of the nearly 6000 genes contain putative loss of function variants. Compared to the vast majority of natural genetic variants, loss of function variants have a much lower allele frequency distribution.
Yet those three studies they cite all search only for mutations that are pretty much guaranteed to totally kill a gene or protein. For example, one paper says:
We adopted a definition for LoF variants expected to correlate with complete loss of function of the affected transcripts: stop codon-introducing (nonsense) or splice site-disrupting single-nucleotide variants (SNVs), insertion/deletion (indel) variants predicted to disrupt a transcript's reading frame, or larger deletions ...
That's akin to counting only burnt-out shells of wrecked cars as examples of accidents that degrade an auto, while ignoring fender benders, flat tires, and so on. There are many more mutations that would not be picked up by the researchers' methods that nonetheless would be expected to seriously degrade or even destroy the function of a protein. Since the rates leading to the kinds of mutations in the cited papers are likely to be at least ten-fold lower than general point mutations in the gene (which, again, the study passed over) there may be many more genes - perhaps five- to ten-fold more (about a quarter to a half of mutated genes) - that have been degraded or even functionally destroyed. Further research is needed to say for sure. (I know which way I'll bet.) The remaining fraction of mutated genes in the population is likely to consist mostly of selectively neutral changes, neither helping nor hurting the organism, and not contributing anything in themselves to the fitness of the species.
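The quoted human figure can be checked directly, and Behe's extrapolation written out as rough arithmetic. Note that the five- to ten-fold multiplier is his estimate, not a measured value:

```python
# Fraction of putative loss-of-function (LoF) variants quoted in the review.
lof_variants = 57_137
total_variants = 1_639_223
frac = lof_variants / total_variants
print(round(100 * frac, 1))            # 3.5 (percent strict LoF)

# Behe's back-of-envelope: if subtler degradative point mutations occur
# roughly 5-10x as often as the strict LoF calls counted above, the
# degraded fraction would be on the order of:
print(round(5 * frac, 2), "to", round(10 * frac, 2))   # 0.17 to 0.35
```

The strict-LoF base rate thus supports the order of magnitude of the extrapolation, though the multiplier itself remains an assumption rather than a measurement.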

Comment: See the first part of Behe's response here: Michael Behe responds to his Lehigh colleagues' inability to grasp the first rule of adaptive evolution