News Feeds

It's Time for Jupiter's Annual Checkup by Hubble

Universe Today Feed - Fri, 03/15/2024 - 7:48am

Each year, the Hubble Space Telescope focuses on the giant planets in our Solar System when they’re near the closest point to Earth, which means they’ll be large and bright in the sky. Jupiter had its photos taken on January 5-6th, 2024, showing off both sides of the planet. Hubble was looking for storm activity and changes in Jupiter’s atmosphere.

The images are part of OPAL, the Outer Planet Atmospheres Legacy program. These yearly images provide a long-term baseline of observations of the outer planets, helping astronomers understand their atmospheric dynamics and evolution as gas giants. Jupiter was at its closest point to Earth, around opposition, back in November 2023.

Jupiter’s colorful clouds present an ever-changing medley of shapes and colors, as it is the stormiest place in the Solar System. Its atmosphere is tens of thousands of kilometers deep, and this stormy atmosphere gives the planet its banded appearance. Here you can find cyclones, anticyclones, wind shear, and other large and fantastic storms.

The largest and most famous storm on Jupiter is the Great Red Spot. In the image on the left, you can see the Great Red Spot and a smaller spot to its lower right known as Red Spot Jr. The two spots pass each other every two years on average. In the right image, several smaller storms are rotating in alternating atmospheric bands.

“The many large storms and small white clouds are a hallmark of a lot of activity going on in Jupiter’s atmosphere right now,” said OPAL project lead Amy Simon of NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

This 12-panel series of Hubble Space Telescope images, taken January 5-6, 2024, presents snapshots of a full rotation of the giant planet Jupiter. The Great Red Spot can be used to measure the planet’s real rotation rate of nearly 10 hours. The innermost Galilean satellite, Io, is seen in several frames, along with its shadow crossing over Jupiter’s cloud tops. Hubble monitors Jupiter and the other outer solar system planets every year under the Outer Planet Atmospheres Legacy program. Credit: NASA, ESA, Joseph DePasquale (STScI).

NASA explains that the bands are produced by air flowing in different directions at various latitudes with speeds approaching 560 km/h (350 miles per hour). Lighter-hued areas where the atmosphere rises are called zones, while the darker regions where air falls are called belts. When these opposing flows interact, storms and turbulence appear.

Hubble tracks these dynamic changes every year (see a few of our previous articles about Hubble’s views of Jupiter). There is always plenty of activity and change taking place from year to year.

Toward the far-left edge of the right-side image is Jupiter’s tiny moon Io. The variegated orange color is where volcanic outflow deposits are seen on Io’s surface.

Side-by-side images show the opposite faces of Jupiter. The largest storm, the Great Red Spot, is the most prominent feature in the left bottom third of this view. Credit: NASA, ESA, Amy Simon (NASA-GSFC).

The post It's Time for Jupiter's Annual Checkup by Hubble appeared first on Universe Today.

Categories: Science

Single mathematical model governs primate brain shape across species

New Scientist Feed - Fri, 03/15/2024 - 7:00am
An analysis of primate brains shows that the pattern of folds on the surface follows the same mathematical pattern across species
Categories: Science

This is a 1.3 Gigapixel Image of a Supernova Remnant

Universe Today Feed - Fri, 03/15/2024 - 6:37am

Stars more massive than the Sun blow themselves to pieces at the end of their lives. Usually leaving behind either a black hole or a neutron star (sometimes observed as a pulsar), they also scatter heavy elements across their host galaxy. One such star went supernova nearly 11,000 years ago, creating the Vela Supernova Remnant. The resulting expanding cloud of debris spans almost 100 light-years and appears twenty times the diameter of the full Moon in our sky. Astronomers have recently imaged the remnant with the 570-megapixel Dark Energy Camera (DECam), creating a stunning 1.3-gigapixel image.

The Vela supernova remnant is visible in long-exposure photographs in the constellation Vela. It is the result of a star more massive than the Sun reaching the end of its life. As the progenitor star evolved, fusion deep in its core ceased. Without the outward push of thermonuclear pressure, the star imploded under the immense force of gravity. The inward-rushing material then rebounded, producing the supernova explosion we see. The shockwave from the event is still travelling through the surrounding gas cloud thousands of years later.

The recently released image is one of the largest ever taken of the object. DECam, built by the Department of Energy, is mounted on the 4-metre Víctor M. Blanco telescope in Chile. The image reveals amazing levels of detail, with red, yellow and blue tendrils of gas. It was taken through three colour filters in a technique familiar to amateur astronomers: each filter captures a specific range of wavelengths, and the resulting frames are stacked on top of each other during processing to produce the stunning high-resolution colour image.
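The stacking step the article describes can be sketched in a few lines. This is only an illustration of the idea, using hypothetical toy frames rather than real DECam data or its actual processing pipeline (which also involves calibration, registration, and mosaicking across CCDs):

```python
import numpy as np

def stack_filters(r_frame, g_frame, b_frame):
    """Combine three single-filter exposures into one colour image.
    Each channel is normalised to [0, 1] so no filter dominates."""
    rgb = np.stack([r_frame, g_frame, b_frame], axis=-1).astype(float)
    for c in range(3):
        ch = rgb[..., c]
        rgb[..., c] = (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return rgb

# Toy 4x4 "exposures" standing in for the filtered frames
rng = np.random.default_rng(1)
frames = [rng.integers(0, 65535, size=(4, 4)) for _ in range(3)]
color = stack_filters(*frames)
print(color.shape)  # (4, 4, 3)
```

Assigning each filter to a colour channel is exactly the trick amateur astrophotographers use, just at vastly smaller scale.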

The Dark Energy Camera mounted on CTIO’s Blanco 4-meter telescope. Credit: DOE/FNAL/DECam/R. Hahn/CTIO/NOIRLab/NSF/AURA

The effects of a supernova explosion of this type take hundreds of thousands of years to dissipate, but the core of the collapsed star remains. As the star collapses, the core is compressed into an ultra-dense sphere of neutrons, the result of protons and electrons being forced together under extreme pressure. The Vela Pulsar is only a few kilometres across yet contains roughly as much mass as the Sun. The stellar remnant rotates rapidly, about 11 times per second, sweeping a powerful beam of radiation across the Galaxy.

Comparison with previous images from other instruments highlights the incredible capabilities of DECam. Coupled to the 4-metre telescope in Chile, it operates like a conventional camera. Light enters the telescope and is redirected back up the tube by the large mirror. It then passes into DECam, through a 1-metre corrective lens, and arrives at its final destination: a grid of 62 charge-coupled devices. These little sensors generate a charge proportional to the amount of light that falls on them. With an array of 62 CCDs totalling 570 million pixels, a high-resolution image can be reconstructed!

Source: Ghostly Stellar Tendrils Captured in Largest DECam Image Ever Released

The post This is a 1.3 Gigapixel Image of a Supernova Remnant appeared first on Universe Today.

Categories: Science

Mental health conditions may accelerate ageing by damaging RNA

New Scientist Feed - Fri, 03/15/2024 - 6:00am
People with mental health conditions have greater amounts of damaged RNA than those without one, which might explain the link between the conditions and age-related diseases such as cancer
Categories: Science

Searching For SUEP at the LHC

Science blog of a physics theorist Feed - Fri, 03/15/2024 - 5:20am

Recently, the first completed search for what is nowadays known as SUEP — a Soft-Unclustered-Energy Pattern, in which large numbers of low-energy particles explode outward from one of the proton-proton collisions at the Large Hadron Collider [LHC] — was made public by physicists working at the CMS experiment. As a theoretical idea, SUEP has its origin in 2006-2008, but it was this paper from 2016 that finally brought the possibility to widespread attention. (However, the name they gave it was unfortunate. To replace it, the acronym “SUEP” was invented.)

How can SUEP arise? If a proton-proton collision produces currently-unknown types of particles that

  • do not interact with ordinary matter directly (i.e. they are immune to the electromagnetic, strong nuclear and weak nuclear forces),
  • but do interact with each other, via their own, ultra-powerful force,

they can cause that collision to turn to SUEP.

While the familiar strong nuclear force mainly produces large numbers of particles in narrow sprays, known as jets, a new ultra-strong force could produce even larger numbers of particles, with relatively less energy-per-particle, arranged in near-spherical blasts. I gave a somewhat detailed description of SUEP in this post. (In fact, SUEP is a prediction of string theory — though, I hasten to add, one that has nothing to do with whether string theory describes quantum gravity in our universe.)
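The jets-versus-SUEP contrast can be made quantitative with an event-shape variable. Below is a toy sketch, my own illustration rather than the discriminant actually used in the CMS search, of transverse sphericity, a standard variable that is near 0 for back-to-back jets and near 1 for an isotropic spray of particles:

```python
import numpy as np

def transverse_sphericity(px, py):
    """Transverse sphericity from the 2x2 transverse momentum tensor:
    ~0 for pencil-like back-to-back jets, ~1 for isotropic events."""
    M = np.array([[np.sum(px * px), np.sum(px * py)],
                  [np.sum(px * py), np.sum(py * py)]])
    lam = np.sort(np.linalg.eigvalsh(M))  # eigenvalues, ascending
    return 2.0 * lam[0] / (lam[0] + lam[1])

rng = np.random.default_rng(0)

# "Jetty" event: two back-to-back sprays of high-energy particles along x
n = 200
signs = rng.choice([-1.0, 1.0], n)
phi_jet = rng.normal(0.0, 0.1, n)        # narrow angular spread
pt_jet = rng.exponential(20.0, n)        # relatively high momenta
px_j = signs * pt_jet * np.cos(phi_jet)
py_j = signs * pt_jet * np.sin(phi_jet)

# "SUEP-like" event: many more particles, soft, uniform in azimuth
phi_suep = rng.uniform(0.0, 2.0 * np.pi, 5 * n)
pt_suep = rng.exponential(1.0, 5 * n)    # relatively low momenta
px_s = pt_suep * np.cos(phi_suep)
py_s = pt_suep * np.sin(phi_suep)

print(transverse_sphericity(px_j, py_j))  # close to 0
print(transverse_sphericity(px_s, py_s))  # close to 1
```

Real searches must also contend with pileup, detector acceptance and trigger thresholds, so the actual analysis variables are considerably more involved than this.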

Below are shown two events, one with SUEP and one without, simulated back in 2007. Can you see the difference? (In these crude images, the darker lines represent higher-energy particles, and energy deposits are drawn in orange. You can see that the picture at right has lower-energy particles, more numerous and distributed more symmetrically, than the picture on the left.)

(Left) A busy non-SUEP event, with many jets of high-energy particles. (Right) A SUEP-like event, with particles that have lower energy, are more numerous, and are more broadly spread around. Simulation by the author in 2007, using a modified form of PYTHIA 6; originally presented here and here.

Now here’s some real data. Below is a typical (though still quite active) proton-proton collision at CMS. The yellow tracks show where the particles went. You can see that not all the tracks are straight (in contrast to my simulated events above). That’s because inside CMS is a magnetic field, which bends the paths of charged particles. The less energy a particle has, the more it curves. In this event, a substantial fraction of the tracks are straight and are clustered into narrow sprays (with orange cones drawn around them to guide the eye). These are the typical jets of mostly-high-energy particles created by the Standard Model’s strong nuclear force.

Now, here’s another real event observed at CMS, a truly amazing proton-proton collision that created an exceptional number of particles. Although there is a chance that it is SUEP, it’s probably just an extraordinary, rare process created by the strong nuclear force. Notice that almost all the tracks curve — these particles each have relatively low energy — and there are hardly any clusters of tracks similar to the ones above.

As was the case for the Higgs boson, a single suggestive picture is not enough. Discovery of SUEP would require many such SUEP-y proton-proton collisions be observed, in order that they could be distinguished, statistically, from known phenomena. (To be fair, there are some types of SUEP where just two or three events would suffice. But that’s a story for another day.)

No Discovery… But Still, Congratulations

Had this search actually found some evidence of SUEP, you would have seen it in news headlines. But it came up empty, as is the case for most scientific quests for new things. Nevertheless, despite a lack of a discovery, congratulations to CMS are due. This was a first-of-its-kind search, employing novel methods. Here’s CMS’s own description of their search.

Meanwhile, the story of SUEP is not over. CMS only looked for certain kinds of SUEP, and there are many more. A variety of hunting strategies will be needed in future, in order to cover all the possibilities.

The Current Status of the LHC Program

More generally, I want to highlight the significance and role of novel search strategies at the LHC experiments. This issue is often underestimated or misunderstood.

At the moment, and for the last few years, the central question facing LHC experimenters and their theoretical-physicist colleagues is this:

Does the Standard Model correctly describe everything that happens in the LHC's proton-proton collisions, or is there something more to be found? In 2012-2016, the discovery and initial examination of the Higgs boson completed the Standard Model. Since that time, nothing outside the Standard Model has been observed at the LHC. But it’s crucial to remember that although finding something proves that it exists, not finding it does not prove it does not exist.

It’s similar to trying to find a set of keys that might be in your house. If you find them right away, your search is over. But if you don’t find them right away, you can’t conclude they’re not in the house. You need to keep looking; maybe you haven’t looked in the right place yet. You have to search as carefully as you can, covering all locations and considering all possibilities, before you conclude that they simply must be elsewhere.

A single failure of the Standard Model would bring its reign to an end, and answer the central question in the negative. But to answer it with a reasonably confident “Yes” will require a thorough plan of searches for a wide variety of possible phenomena. If our search strategy leaves loopholes, we simply won’t be able to answer the central question with either a “No” or a “Yes”! And then we’ll be left in limbo.

Importantly, making the LHC search programs more thorough isn’t expensive. In fact, it’s more expensive not to make them thorough.

Each experiment’s data is collected as a giant pile, and each search for new phenomena involves examining one of the already-assembled giant piles through a particular lens. If we don’t hunt for everything reasonable in those data sets, then we’re partly wasting the time, effort and money that we spent to obtain them!

And there’s no reason for undue pessimism that none of these searches will find anything. Even a dramatic new phenomenon like SUEP can lie hidden in a vast data set, undetected until the moment that someone searches the data in just the right way.

That’s why this first SUEP search is important: it’s a novel way of exploring the LHC’s data. It pushes the boundary of what we know in a previously unexplored direction, and sets a new frontier for future investigation.

Categories: Science

What Is a Grand Conspiracy?

neurologicablog Feed - Fri, 03/15/2024 - 5:09am

Ah, the categorization question again. This is an endless, but much needed, endeavor within human intellectual activity. We have the need to categorize things, if for no other reason than we need to communicate with each other about them. Often skeptics, like myself, talk about conspiracy theories or grand conspiracies. We also often define exactly what we mean by such terms, although not always exhaustively or definitively. It is too cumbersome to do so every single time we refer to such conspiracy theories. To some extent there is a cumulative aspect to discussions about such topics, either here or, for example, on my podcast. To some extent I expect regular readers or listeners to remember what has come before.

For blog posts I also tend to rely on links to previous articles for background, and I have little patience for those who cannot bother to click these links to answer their questions or before making accusations about not having properly defined a term, for example. I don’t expect people to have memorized my entire catalogue, but click the links that are obviously there to provide further background and explanation. Along those lines, I suspect I will be linking to this very article in all my future articles about conspiracy theories.

What is a grand conspiracy theory? First a bit more background, about categorization itself. There are two concepts I find most useful when thinking about categories – operational definition and defining characteristics. An operational definition is one that essentially is a list of inclusion and exclusion criteria, a formula, that if you follow, will determine if something fits within the category or not. It’s not a vague description or general concept – it is a specific list of criteria that can be followed “operationally”. This comes up a lot in medicine when defining a disease. For example, the operational definition of “essential hypertension” is persistent (three readings or more) systolic blood pressure over 130 or diastolic blood pressure over 80.
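One virtue of an operational definition is that it is essentially executable. As a toy sketch of the hypertension example above (same thresholds as stated in the text, simplified, and obviously not clinical guidance):

```python
def is_essential_hypertension(readings):
    """Operational definition as explicit inclusion criteria: at least
    three readings, each with systolic > 130 or diastolic > 80 mmHg.
    readings: list of (systolic, diastolic) pairs."""
    if len(readings) < 3:
        return False  # not "persistent" yet
    return all(sys > 130 or dia > 80 for sys, dia in readings)

print(is_essential_hypertension([(142, 88), (138, 92), (135, 85)]))  # True
print(is_essential_hypertension([(118, 76), (122, 79), (120, 74)]))  # False
```

The point is simply that a good operational definition leaves no room for judgment calls: anyone following the same criteria reaches the same verdict.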

Operational definitions often rely upon so-called “defining characteristics” – those features that we feel are essential to the category. For example, how do we define “planet”? Well, astronomers had to agree on what the defining characteristics of “planet” should be, and it was not entirely obvious. The one that created the most controversy was the need to gravitationally clear out one’s orbit – the defining characteristic that excluded Pluto from the list of planets.

There is therefore some subjectivity in categories, because we have to choose the defining characteristics. Also, such characteristics may have fuzzy or non-obvious boundaries. This leads to what philosophers call the “demarcation problem” – there may be a fuzzy border between categories. But, and this is critical, this does not mean the categories themselves don’t exist or are not meaningful.

With all that in mind, how do we operationally define a “grand conspiracy”, and what are its defining characteristics? A grand conspiracy has a particular structure, but I think the key defining characteristic is the conspirators themselves. The conspirators are a secret group with far more power than they should have, or than any group realistically could have. Further, they operate for their own nefarious goals and deceive the public about their existence and their true aims. This shadowy group may operate within a government, represent a shadow government itself, or even constitute a secret world government. They can control the media and other institutions as necessary to control the public narrative. They are often portrayed as diabolically clever, able to orchestrate elaborate deceptions and false flag operations, down to tiny details.

But of course there would be no conspiracy theory if such a group were entirely successful. So there must also be an “army of light” that has somehow penetrated the veil of the conspirators; they see the conspiracy for what it is and try to expose it. Then there is everyone else, the “sheeple”, who are naive and deceived by the conspiracy.

That is the structure of a grand conspiracy. Functionally and psychologically, the grand conspiracy theory operates to insulate the beliefs of the conspiracy theorist. Any evidence that contradicts the conspiracy theory is dismissed as a “false flag” operation, meant to cast doubt on the conspiracy. The utter lack of direct evidence for the conspiracy is attributed to the extensive ability of the conspirators to cover up any and all such evidence. So how, then, do conspiracy theorists even know that the conspiracy exists? They rely on pattern recognition, anomaly hunting, and hyperactive agency detection – not consciously or explicitly, but that is what they do. They look for apparent alignments, or for anything unusual. Then they assume a hidden hand operating behind the scenes, and give it all a sinister interpretation.

Here is a good recent example: Joe Rogan recently “blew” his audience’s mind by claiming that the day before 9/11, Donald Rumsfeld said in a press conference that the Pentagon had lost 2.3 trillion dollars. Then, the next day, a plane crashed into the part of the Pentagon that was carrying out the very audit of those missing trillions. Boom – a grand conspiracy is born (of course fitting into the existing conspiracy theory that 9/11 was an inside job). The coincidence was the press conference the day before 9/11, which is not much of a coincidence, because you can go anomaly hunting through any government activity in the days before 9/11 for anything that can be interpreted in a sinister way.

In this case, Rumsfeld did not say the Pentagon lost $2.3 trillion. He was criticizing the outdated technology in use by the DOD, saying it is not up to the modern standards used by private corporations. An analysis – released to the public one year earlier – concluded that because of the outdated accounting systems, as much as 2.3 trillion dollars in the Pentagon budget cannot be accurately tracked and documented. But of course, Rogan is just laying out a sinister-looking coincidence, not telling a coherent story. What is he actually saying? Was Rumsfeld speaking out of school? Was 9/11 orchestrated in a single day to cover up Rumsfeld’s accidental disclosure? Is Rumsfeld a rebel who was trying to expose the coverup? Would crashing into the Pentagon sufficiently destroy any records of DOD expenditures to hide the fact that $2.3 trillion was stolen? Where is the press on this story? How can anyone make $2.3 trillion disappear? How did the DOD operate with so much money missing from their budget?

Such questions should act as a “reality filter” that quickly marks the story as implausible and even silly. But the grand conspiracy reacts to such narrative problems by simply expanding the scope, depth, and power of the conspiracy. So now we have to hypothesize the existence of a group within the government, complicit with many people in the government, that can steal $2.3 trillion from the federal budget, keep it from the public and the media, and orchestrate and carry out elaborate distractions like 9/11 when necessary.

This is why, logically speaking, grand conspiracy theories collapse under their own weight. They must, by necessity, grow in order to remain viable, until you have a vast multi-generational conspiracy spanning multiple institutions with secret power over many aspects of the world. And they can keep it all secret by exerting unbelievable control over the thousands and thousands of individuals who would need to be involved. They can bribe, threaten, and kill anyone who would expose them. Except, of course, for the conspiracy theorists themselves, who can work tirelessly to expose them without fear, apparently.

This apparent contradiction has even led to a meta conspiracy theory that all conspiracy theories are in fact false flag operations, meant to discredit conspiracy theories and theorists so that the real conspiracies can operate in the shadows.

Being a “grand” conspiracy is not just about size. As I have laid out, it is about how such conspiracies allegedly operate, and the intellectual approach of the conspiracy theorists who believe in them. This can fairly easily be distinguished from actual conspiracies, in which more than one person or entity agrees to carry out some secret illegal activity. Actual conspiracies can even become fairly extensive, but the bigger they get, the greater the risk that they will be exposed, which happens all the time. Of course, we can’t know about the conspiracies that were never exposed, by definition, but certainly a vast number of conspiracies do ultimately get exposed. This makes it hard to believe that a conspiracy orders of magnitude larger could operate for decades without similarly being exposed.

Ultimately the grand conspiracy theory is about the cognitive style and behavior of the conspiracy theorists – the subject of a growing body of psychological research.

The post What Is a Grand Conspiracy? first appeared on NeuroLogica Blog.

Categories: Skeptic

‘Sound laser’ is the most powerful ever made

New Scientist Feed - Fri, 03/15/2024 - 4:00am
A new device uses a reflective cavity, a tiny bead and an electrode to create a laser beam of sound particles ten times more powerful and much narrower than other “phonon lasers”
Categories: Science

Some Good, But Preliminary Real World Data on Those Baby RSV Shots

Science-based Medicine Feed - Fri, 03/15/2024 - 4:00am

The first post-rollout data for the RSV antibody shot looks pretty good, but far too many little ones missed out.

The post Some Good, But Preliminary Real World Data on Those Baby RSV Shots first appeared on Science-Based Medicine.
Categories: Science

Nancy Grace Roman will Map the Far Side of the Milky Way

Universe Today Feed - Fri, 03/15/2024 - 3:29am

The Galaxy is a collection of stars, planets, gas clouds and, to the dismay of astronomers, dust clouds. The dust blocks starlight from penetrating, so it’s very difficult to learn about the far side of the Galaxy. Thankfully the upcoming Nancy Grace Roman Space Telescope has infrared capability, so it can see through the dust. A systematic survey of the far side of the Milky Way is planned to see what’s there, and it could discover billions of objects in just a month.

The Nancy Grace Roman telescope (NGRt) is named after NASA’s inaugural chief astronomer, known as the ‘mother of the Hubble Space Telescope.’ It will have a field of view at least 100 times that of Hubble, giving it an impressive swathe of space in each capture. Not only will it be able to peer through dust clouds, it will also be able to block out starlight, enabling direct observation of exoplanets alongside its other infrared observations.

The incredible resolution of NGRt will help identify individual stars within interstellar dust clouds even at the far reaches of the Galaxy. The observations are expected to lead to an extensive catalogue of stars previously unseen. Even ESA’s star-mapping satellite Gaia does not have the reach and precision of NGRt, which will surpass it tenfold: Gaia’s extraordinary work mapped over a billion stars within a distance of about 10,000 light years, while NGRt will map over 100 billion stars out to 100,000 light years! As far as our Galaxy is concerned, there’s not much out of NGRt’s reach. And although Spitzer, NASA’s earlier infrared space telescope, surveyed the Galactic plane, it did not have the resolution to resolve stars on the far side of the Galaxy.

The Spitzer Space Telescope observatory trails behind Earth as it orbits the Sun. Credit: NASA/JPL-Caltech

In 2021, calls were made for survey ideas, and the Galactic Plane Survey was the top-ranking proposal. It is now down to the scientific community to pull together observational projects to support the survey. The survey will target 1,000 square degrees of sky, equivalent to about 5,000 full moons. That might not sound like much, but it would allow nearly all the stars in our Galaxy to be surveyed. That might sound like a lifelong piece of work, but NGRt is a telescope that means business, knocking out the survey in around a month!
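The "5,000 full moons" comparison is easy to sanity-check with back-of-envelope arithmetic, assuming a full-Moon angular diameter of about 0.52 degrees:

```python
import math

# Assumed values: full-Moon angular diameter ~0.52 degrees,
# survey area 1,000 square degrees (as stated in the article)
moon_diameter_deg = 0.52
moon_area = math.pi * (moon_diameter_deg / 2) ** 2  # ~0.21 sq. degrees
survey_area = 1000.0

n_moons = survey_area / moon_area
print(round(n_moons))  # roughly 4,700 -- consistent with "about 5,000"
```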

Other observatories could of course undertake similar projects, but it would take years for even Hubble or the James Webb Space Telescope to achieve the same results. They are far better suited to studying external galaxies, and we have seen some incredible images revealing complex galactic structure. Our own Galaxy has rather been overlooked, partly because it’s actually quite difficult to study: the entire sky needs to be observed, and then there is the obscuring effect of dust. ‘We have studied our own Solar System’s neighbourhood well,’ says Catherine Zucker, co-author of a white paper entitled ‘Roman Early-Definition Astrophysics Survey Opportunity’ and astrophysicist at the Center for Astrophysics | Harvard & Smithsonian. ‘We have a very incomplete view of what the other half of the Milky Way looks like beyond the Galactic centre,’ she went on to say.

NGRt is due for launch in 2027 and, if all goes to plan, looks set to deliver not only some exciting science but also a first-time view of objects on the far side of the Galaxy.

Source: NASA’s Roman Team Selects Survey to Map Our Galaxy’s Far Side

The post Nancy Grace Roman will Map the Far Side of the Milky Way appeared first on Universe Today.

Categories: Science

Psychotherapy Redeemed: A Response to Harriet Hall’s “Psychotherapy Reconsidered”

Skeptic.com feed - Fri, 03/15/2024 - 12:00am

While not going so far as arguing, as some have, that psychotherapy is always effective, I’d like to present some data and offer some contrasting considerations to Harriet Hall’s article: “Psychotherapy Reconsidered” (in Skeptic 28.1). Probably no other area within social science practice has been so inordinately and unfortunately praised and damned. Many of us working in the field have long been acutely aware of the difficulties to which Hall and others point, as well as other problems. However, we also regularly observe the positive changes in clients’ lives that psychotherapy—properly practiced—has produced, and in many cases, the lives it has saved.

In her article, the late Harriet Hall, whose work I and all skeptics admire and now miss, stated that no one can provide an objective report about the field, indeed, that there “…aren’t even any basic numbers,” that we don’t know whether psychotherapy works, that it is not based on solid science, and that there is “…no rational basis for choosing a therapy or therapist.”

Hall and other sources she quotes are quite correct in saying that there is much we still don’t know about human psychology, and much that we don’t understand about how the mind and psychotherapy work. Yet it’s also necessary to look at the data and analyses which demonstrate that psychotherapy does work. The case for the defense is made in detail in The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work by Bruce Wampold and Zac Imel, and also in Psychotherapy Relationships That Work by Wampold and John Norcross, both of which present decades of meta-analyses. They review conclusions from an impressive number of psychotherapy studies and show how humans heal in a social context, as well as offer a compelling alternative to the conventional approach to psychotherapy research, which typically concentrates on identifying the most effective treatment for specific disorders by placing an emphasis on the particular components of treatment.

This is a misguided point in Hall’s argument, as she was looking at the differences between treatments rather than between therapists. Studies that previously claimed the superiority of one method over another ignored who the treatment provider was.1 These wrong research questions arise from using the medical model, in which it is imperative to know which treatment is the most effective for a particular disorder. In psychotherapy, and to some extent in medicine generally, the person administering the treatment is absolutely critical. Indeed, in psychotherapy the most important factor is the skill, confidence, and interpersonal flexibility of the therapist delivering the treatment, not the model, method, or “school” they use, their number of years in practice, or even the amount of professional development they’ve had. How we train and supervise therapists has little impact on the outcomes of psychotherapy, unless each therapist routinely collects outcome data in every session and adjusts their approach to accommodate each client’s feedback.

The Bad News About Psychotherapy

Hall is right on the point that psychotherapy outcomes have not improved much over the last 50 years. Hans Eysenck’s classic study debunking psychotherapy was performed in 1952.2 His view was not seriously challenged until 1977, when a meta-analysis showed that psychotherapy was effective, and that Eysenck was wrong.3 It found that the effect size (ES) for psychotherapy was 0.8, meaning the average treated client scored 0.8 standard deviations above the mean of the untreated sample. Recent meta-analyses show that this ES has remained essentially the same over the intervening decades, despite the proliferation of diagnoses and treatment models.4
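The 0.8 figure is an effect size in standard-deviation units (Cohen's d, the standardized mean difference). A minimal sketch with purely illustrative numbers, not drawn from any cited study:

```python
def cohens_d(mean_treated, mean_control, sd_pooled):
    """Standardized mean difference: the effect-size metric behind
    the ~0.8 figure cited for psychotherapy meta-analyses."""
    return (mean_treated - mean_control) / sd_pooled

# Hypothetical outcome scale: treated clients average 65,
# untreated 57, pooled standard deviation 10.
d = cohens_d(65.0, 57.0, 10.0)
print(d)  # 0.8
```

An effect size of 0.8 is conventionally considered "large": it means the average treated person ends up better off than roughly 79 percent of the untreated group.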

Hall was also accurate in saying that much conflicting data exists from studies about the efficacy of the hundreds of types of psychotherapy. Yet she was incorrect in saying that we don’t even have basic numbers. We now have decades of meta-analyses showing what works and what doesn’t work in psychotherapy.5, 6, 7, 8, 9, 10

Hall was also mostly on-target when she stated, "…proponents of each modality of psychotherapy give us their…impressions about the success of their chosen method." Decades of clinical trials comparing treatment A to treatment B point to the conclusion that all bona fide psychotherapy models work equally well. This finding is consistently replicated in trials comparing therapists who use two different yet coherent, convincing, and structured treatments, provided each treatment offers an explanation for what's bothering the client and a plan the client can work at to overcome their difficulties. Psychotherapy research clearly shows that the model itself contributes only 0–1 percent to the outcomes of psychotherapy.11 This means that proponents of Cognitive Behavioral Therapy, or any model, who claim superiority over other treatments are not basing their claims on the available evidence.

Another correct statement of Hall’s is that most therapists have no evidence to show that what they’re doing is effective. This lack of evidence led others to conclude that, “Beyond personal impressions and informal feedback, the majority of therapists have no hard, verifiable evidence that anything they do makes a difference…Absent a valid and reliable assessment of their performance, it also stands to reason they cannot possibly know what kind of instruction or guidance would help them improve.”12

For decades, free pen-and-paper measures by which therapists can track their outcomes have been available,13 recently superseded by online versions.14 These Feedback Informed Treatment (FIT) online platforms are easy to use and have been utilized by thousands of therapists around the world to get routine feedback from every client on each session. The result: Data from hundreds of thousands of clients is continually being updated. Regrettably, those of us who use these methods are still a small minority of therapists practicing around the world compared to the unknown numbers who, as Hall rightly pointed out, provide psychotherapy in its manifold (and perhaps unregulated) forms.

The online outcome measurement platforms mentioned above are recommended by the International Center for Clinical Excellence (ICCE).15 For decades, the ICCE has been aggregating data from therapists around the world, providing evidence that corroborates some of Hall's critical claims about psychotherapy. Current data show that dropout rates, defined as clients unilaterally stopping treatment without experiencing reliable clinical improvement, are between 20–22 percent among adult populations, even when therapists use FIT.16 Dropout rates are typically higher (40–60 percent) for child and adolescent populations. This raises the unfortunate likelihood that dropout rates for therapists who don't get routine feedback from clients are higher still.

Hall was, however, incorrect in stating that we don’t know about the harms of psychotherapy. There are many examples of discussions and analyses of what doesn’t work in psychotherapy and what can cause harm.17 One study of aggregated data shows that the percentage of people who are reliably worse while in treatment is 5–10 percent.18

Regrettably, the data indicate that the average clinician's outcomes plateau relatively early in their career, despite their belief that they are improving. One review found no evidence that therapists improve in effectiveness beyond their first 50 hours of training, and a number of studies have found that paraprofessionals with perhaps six weeks of training achieve outcomes on par with psychologists holding a PhD, which represents five or more years of training.19 These data support Hall's statement that unless they measure their outcomes, no therapist knows whether their method is more (or less) effective than the methods used by others. Even then, any difference is easily misattributed to the method rather than the therapist. Studies also show that students often achieve outcomes on par with, or better than, their instructors. These facts are amply demonstrated in Witkowski's discussion with Vikram H. Patel,20 whose mental health care manual Where There Is No Psychiatrist is used primarily in developing countries by non-specialist health workers and volunteers.21

Further, there is now evidence that psychotherapists who have been in practice for a few years see themselves as improving even though the data show no such improvement.22 Psychotherapists are not immune either to cognitive biases or to the Dunning-Kruger effect, and a majority rate themselves as being above average. In other words, psychotherapists generally overestimate their abilities. Finally, meta-analyses show that there is a large variation in effectiveness between clinicians, with a small minority of top performing therapists routinely getting superior outcomes with a wide range of clients. Unfortunately, these “supershrinks” are a rare breed.23

To balance the bad news above, what follows is some of the data showing that psychotherapy works.

The Good News About Psychotherapy

Psychotherapy works. It does help people. Since Eysenck's time, and in response to the numerous sources cited by Hall, many studies have demonstrated that the average treated client is better off than eighty percent of an untreated sample.24 That doesn't mean psychotherapy is eighty percent effective; it means that if you compare the average treated person to an untreated sample, that person is doing better than eighty percent of the people in it. This effect size makes psychotherapy outcomes equivalent to those for coronary artery bypass surgery and four times greater than those for the use of fluoride in preventing tooth decay. As discussed earlier, this has remained constant for 50 years, regardless of the problem being treated or the method being employed.
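The "eighty percent" figure is simple arithmetic on the normal distribution: an effect size of .8 places the average treated client .8 standard deviations above the untreated mean, which corresponds to roughly the 79th percentile of the untreated group. A minimal sketch using only Python's standard library (the function name is mine, for illustration):

```python
from statistics import NormalDist

def percentile_superiority(effect_size: float) -> float:
    """Fraction of an untreated (standard normal) comparison sample
    that the average treated person exceeds, given a standardized
    effect size (Cohen's d)."""
    return NormalDist().cdf(effect_size)

# An effect size of 0.8 places the average treated client above
# roughly 79 percent of the untreated sample.
print(round(percentile_superiority(0.8) * 100, 1))  # prints 78.8
```

This is why an ES of .8 is commonly summarized as "better off than about 80 percent of the untreated group" rather than as "80 percent effective."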

Just as in surgery, the tools that psychotherapists use are only as effective as the hands that wield them. How effective are psychotherapists? Real world studies have examined this question by asking clinicians to measure their outcomes routinely with each client in every session, then comparing those outcomes against results from randomized clinical trials (RCTs). It must be noted that researchers running RCTs have many advantages real world practitioners do not: (a) a highly select clientele, since many published studies enroll clients with a single diagnosis while clinicians routinely treat clients with two or more comorbid conditions; (b) lower caseloads; and (c) ongoing supervision and consultation with some of the world's leading experts on psychotherapy. Despite all this, the data document that real world psychotherapy outcomes are equivalent to those of RCTs.25

Therapists around the world, including me, have been using Feedback Informed Treatment (FIT) for decades. I have been seeing clients since 1981 and my clinical outcomes started to improve when I started incorporating FIT into my practice nearly 20 years ago. Those of us who use FIT routinely get quantitative feedback from every client at the beginning of every session. We ask about the client’s view of the outcomes of therapy in four areas of their life: (1) their individual wellbeing; (2) their close personal relationships; (3) their social interactions; and (4) their overall functioning. This measure is termed the Outcome Rating Scale or ORS.26 At the end of every session, we also get quantitative feedback about four items to gauge the client’s experience of: (1) whether they felt heard, understood, and respected by us in that session; (2) whether we talked about what the client wanted to discuss; (3) whether the therapist’s approach/method was a good fit for the client; and (4) an overall rating for the session, also asking if there was anything missing in that session. This measure is termed the Session Rating Scale or SRS.27 The resulting feedback is successively incorporated into the therapy, ensuring that the client’s voice and preferences are privileged.
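The two instruments described above are short enough to capture in a single record per session. The following Python sketch is purely illustrative: the field names and scoring range are my own shorthand, not part of the published ORS or SRS forms, though the four-item, 0–10 structure matches the description above.

```python
from dataclasses import dataclass

@dataclass
class SessionFeedback:
    # ORS: client-rated outcomes, collected at the START of the session.
    individual_wellbeing: float   # 0-10
    close_relationships: float    # 0-10
    social_interactions: float    # 0-10
    overall_functioning: float    # 0-10
    # SRS: client-rated alliance, collected at the END of the session.
    felt_heard: float             # heard, understood, and respected
    right_topics: float           # talked about what the client wanted
    approach_fit: float           # therapist's approach was a good fit
    overall_session: float        # overall rating / anything missing

    @property
    def ors_total(self) -> float:
        """Total outcome score (0-40); tracked session to session."""
        return (self.individual_wellbeing + self.close_relationships
                + self.social_interactions + self.overall_functioning)

    @property
    def srs_total(self) -> float:
        """Total alliance score (0-40); low scores prompt discussion."""
        return (self.felt_heard + self.right_topics
                + self.approach_fit + self.overall_session)
```

Plotting ors_total across sessions is what lets a FIT therapist see, client by client, whether therapy is producing reliable improvement rather than relying on impressions.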

Research shows that individual therapists vary widely in their ability to achieve positive outcomes in therapy, so which therapist a client sees is a big factor in determining the outcome of their therapy. Data gathered over a 2.5-year period from nearly 2,000 clients and 91 therapists documented significant variation in effectiveness among the clinicians in the study and found certain high-performing therapists were 10 times more effective than the average clinician.28 One variable that strongly accounted for this difference in outcome effectiveness was the amount of time these therapists devoted outside of therapy to deliberately practicing objectives which were just beyond their level of proficiency.29

What these studies show is that we’ve been looking in the wrong place for the answers as to why the outcomes of psychotherapy have not improved over the last 50 years. We’ve been studying the effects within the therapy room rather than what happens outside of the therapy room, i.e., what clients bring into their therapy and what therapists do before and after they see their clients.

Indeed, clients and their extra-therapeutic factors contribute 87 percent to the outcomes of psychotherapy!30 Extra-therapeutic factors comprise the client's personality, daily environment, friends, family, work, close relationships, and community support. On average, clients spend less than one hour per week with a therapist; the extra-therapeutic factors make up the other 167 hours of their week, the life to which they return. This raises the question: does that mean there's nothing therapists can do about it? The key is for therapists to (a) attune to these outside factors and resources, and (b) tap into them. The remaining 13 percent of treatment effects breaks down as follows: the individual therapist, 4–9 percent; the working alliance (relationship) between therapist and client, 4.9–8 percent; expectancy/placebo and the rationale for treatment, 4 percent; and the model of therapy, an insignificant 0–1 percent. This highlights that who the therapist is and how they relate to their clients is the main variable accounting for positive outcomes apart from the client's extra-therapeutic factors.

So, how should you choose a therapist?

There is now a movement led by eminent researchers, educators, policymakers, and supervisors in the psychotherapy field to ensure that after graduation therapists consciously and intentionally engage in ongoing Deliberate Practice: critically analyzing their own skills and session performance, continuously rehearsing their skillset (particularly their in-the-moment responses to emotionally challenging clients and situations), and seeking expert feedback. Deliberate Practice builds on three decades of research by K. Anders Ericsson, who made a name for himself as "the expert on expertise," into the components of expertise across many domains, including sport, medicine, music, mathematics, business, education, and computer programming. Drawing on that work, a 2015 study set out to understand what differentiated top performing therapists from average ones.31 It found that top performers spent 2.5 times more time in Deliberate Practice before and after their client sessions than average therapists did, and 14 times more than the least effective therapists!

This article appeared in Skeptic magazine 28.4

Experts in the field encourage therapists, supervisors, educators, and licensing bodies to “change the rules” about how psychotherapists are trained and how psychotherapy is practiced.32 The research reviewed here highlights that we can do this in two main ways: first, by making our clients’ voices the central focus of psychotherapy by routinely engaging in Feedback Informed Treatment with every client in every session to create a culture of feedback; and second, by each therapist receiving guidance from a coach who uses Deliberate Practice. To ensure accountability to clients, health insurance companies, and the psychotherapy field itself, this should be the basis for all practice, training, accreditation, and ongoing licensing of therapists.

In summary, psychotherapy does work. For readers who are curious to explore why psychotherapy works and which factors contribute to it doing so, I’d highly recommend Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness33 and its accompanying Field Guide to Better Results.34

About the Author

Vivian Baruch is a relationship coach, counselor, psychotherapist, and clinical supervisor specializing in relationship issues for singles and couples. She has been practicing since 1981, has been a psychotherapy educator at the Australian College of Applied Psychology, and taught supervision to psychotherapists at the University of Canberra. In 2004, she trained with Scott D. Miller, and has been using Feedback Informed Treatment (FIT) for 20 years to routinely incorporate her clients’ feedback into her psychotherapy and supervision work.

References
  1. https://rb.gy/iw4yb
  2. https://rb.gy/4y3su
  3. https://rb.gy/bc9u9
  4. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  5. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  6. Norcross, J. C., & Lambert, M. J. (Eds.). (2019). Psychotherapy Relationships That Work: Volume 2: Evidence-Based Therapist Responsiveness. Oxford University Press.
  7. https://rb.gy/qm2hz
  8. https://rb.gy/x7bm9
  9. https://rb.gy/rfq74
  10. https://rb.gy/rz91t
  11. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  12. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  13. https://rb.gy/edpb6
  14. https://rb.gy/ktioc
  15. https://rb.gy/2bjuy
  16. https://rb.gy/6f55y
  17. https://rb.gy/tpuo2
  18. https://rb.gy/uqp3k
  19. https://rb.gy/obhfg
  20. Witkowski, T. (2020). Shaping Psychology: Perspectives on Legacy, Controversy and the Future of the Field. Springer Nature.
  21. Patel, V. (2003). Where There Is No Psychiatrist: A Mental Health Care Manual. RCPsych Publications.
  22. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  23. Ricks, D. F. (1974). Supershrink: Methods of a Therapist Judged Successful on the Basis of Adult Outcomes of Adolescent Patients. In D.F. Ricks, A. Thomas, & M. Roff (Eds.), Life History Research in Psychopathology: III. University of Minnesota Press.
  24. https://rb.gy/obhfg
  25. https://rb.gy/uulpw
  26. https://rb.gy/d5mbx
  27. Ibid.
  28. https://rb.gy/0hvy3
  29. https://rb.gy/rkr85
  30. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  31. https://rb.gy/ye406
  32. https://rb.gy/r2jb8
  33. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  34. https://rb.gy/f3c3e
Categories: Critical Thinking, Skeptic

A theory linking ignition with flame provides roadmap to better combustion engines

Matter and energy from Science Daily Feed - Thu, 03/14/2024 - 7:21pm
Researchers have theoretically linked ignition and deflagration in a combustion system, unlocking new configurations for stable, efficient combustion engines due to the possible existence of any number of steady-state solutions.
Categories: Science

Researchers prove fundamental limits of electromagnetic energy absorption

Matter and energy from Science Daily Feed - Thu, 03/14/2024 - 2:14pm
Electrical engineers have determined the theoretical fundamental limit for how much electromagnetic energy a transparent material with a given thickness can absorb. The finding will help engineers optimize devices designed to block certain frequencies of radiation while allowing others to pass through, for applications such as stealth or wireless communications.
Categories: Science

New study shows analog computing can solve complex equations and use far less energy

Matter and energy from Science Daily Feed - Thu, 03/14/2024 - 11:53am
A team of engineers has proven that their analog computing device, called a memristor, can complete complex, scientific computing tasks while bypassing the limitations of digital computing.
Categories: Science

Vac to the future

Computers and Math from Science Daily Feed - Thu, 03/14/2024 - 11:53am
Scientists recently published the results of a competition that put researchers to the test. For the competition, part of the NIH-funded Computational Models of Immunity network, teams of researchers from different institutions offered up their best predictions regarding B. pertussis (whooping cough) vaccination.
Categories: Science

A new world of 2D material is opening up

Matter and energy from Science Daily Feed - Thu, 03/14/2024 - 11:53am
Materials that are incredibly thin, only a few atoms thick, exhibit unique properties that make them appealing for energy storage, catalysis and water purification. Researchers have now developed a method that enables the synthesis of hundreds of new 2D materials.
Categories: Science

What do home faecal test kits really reveal about our gut microbiome?

New Scientist Feed - Thu, 03/14/2024 - 11:00am
Many firms sell direct-to-consumer faecal testing kits, but an investigation has revealed that scientists don't yet know what makes for a healthy gut microbiome
Categories: Science

Bill Maher confers the 2024 Cojones Awards

Why Evolution is True Feed - Thu, 03/14/2024 - 10:45am

In this short seven-minute segment from last week's "Real Time," Bill Maher confers five "Cojones Awards" for having… well, moxie. (Women can also get the Golden Testicles.) You may recognize some of the winners, and of course, at the end, there's the winner of the Lifetime Achievement Award, which I have to say is well deserved.

Categories: Science

Another Hycean Planet Found? TOI-270 d

Universe Today Feed - Thu, 03/14/2024 - 10:22am

Hycean planets may be able to host life even though they’re outside what scientists consider the regular habitable zone. Their thick atmospheres can trap enough heat to keep the oceans warm even though they’re not close to their stars.

Astronomers have found another one of these potential hycean worlds named TOI-270 d.

The word hycean is a portmanteau of ‘hydrogen’ and ‘ocean’ and it describes worlds with surface oceans and thick hydrogen-rich atmospheres. Scientists think that they may be common around red dwarfs and that they could be habitable, although any life that exists on a hycean world would be aquatic.

Because they contain so much water, scientists think they’re larger than comparable non-hycean planets. Their larger size makes them easier targets for atmospheric study by the JWST. Though hycean worlds are largely hypothetical now, the JWST is heralding a new era in planetary science and may be able to show that they do exist.

The telescope’s ability to characterize exoplanet atmospheres could be the key to confirming their existence. Using transmission spectroscopy, the space telescope can watch as starlight travels through their atmospheres, revealing the presence of certain important chemicals and even biosignatures.

The exoplanet TOI-270 d could be a hycean world, and a new paper presents evidence supporting that. The paper is "Possible Hycean conditions in the sub-Neptune TOI-270 d," and it's published in the journal Astronomy and Astrophysics. The authors are Måns Holmberg and Nikku Madhusudhan, both from the Institute of Astronomy at the University of Cambridge.

“The JWST has ushered in a new era in atmospheric characterizations of temperate low-mass exoplanets with recent detections of carbon-bearing molecules in the candidate Hycean world K2-18 b,” the authors write. That was an important discovery, and the authors of this paper say the JWST has more to show us about exoplanet atmospheres. In this work, the pair of researchers examined two sub-Neptunes in the TOI-270 system as they transited their M-dwarf. “We report our atmospheric characterization of the outer planet TOI-270 d, a candidate Hycean world, with JWST transmission spectroscopy…,” they write.

TOI-270 is an M-dwarf (red dwarf) star about 73 light-years away. Red dwarfs are known to sometimes flare violently, ruling out habitability on nearby planets. However, the authors describe TOI-270 as a quiet star. It hosts three sub-Neptune planets, and the pair of outermost planets, TOI-270 c and d, are both candidate hycean worlds. TOI-270 d is considered the strongest candidate.

TOI-270 d is about 4.2 Earth masses and measures about 2.1 Earth radii. It takes just over 11 Earth days to complete an orbit, a fact that aids atmospheric study. The Hubble Space Telescope looked at TOI-270 d recently, and its observations suggested a hydrogen-rich atmosphere with some evidence of H2O. Those results warranted further examination with the more powerful JWST.

Though scientists still haven't proven that hycean worlds exist, they know something about their expected atmospheric chemistry. On an ocean world with a thick, hydrogen-rich atmosphere, scientists expect to find strong signatures of CH4 (methane) and CO2, and no evidence of NH3 (ammonia). This is what the JWST found at K2-18b, though there is still uncertainty about whether that exoplanet is a hycean world.

This graphic shows what the JWST found in the atmosphere of K2-18 b, a suspected hycean world. Image Credit: NASA, CSA, ESA, J. Olmstead, N. Madhusudhan

Every planet is different, but each type should have things in common. “For Hycean worlds, the presence of an ocean below a thin H2-rich atmosphere may be inferred by an enhancement of CO2, H2O, and/or CH4, together with a depletion of NH3,” the authors write. Since TOI-270 d is a candidate hycean world, its spectroscopy should be similar to other hycean candidates like K2-18b. “Therefore, for the Hycean candidate TOI-270 d, observations of these key carbon-, nitrogen-, and oxygen- (CNO) bearing molecules are required to assess whether or not it is a Hycean world,” the paper’s authors explain.

In October of 2023, the JWST observed TOI-270 b and d during two transits. The observations amounted to a total exposure time of 5.3 hours. “This rare event allows for transmission spectroscopy of both planets,” the authors write.

This figure from the study shows the spectra from both the Hubble Space Telescope and the JWST. The prominent molecules responsible for the features in different spectral regions are labelled. Image Credit: Holmberg and Madhusudhan 2024.

“Our atmospheric retrieval results support the inference of an H2-rich atmosphere on TOI-270 d and provide valuable insights into the abundances of dominant CNO molecules,” the authors explain. Furthermore, the abundances are similar to what the JWST found on K2-18 b, another suspected hycean world.

But when it comes to water, the results are less certain. “We found only tentative evidence of H2O, with the detection significance and abundance estimates varying…,” the authors write. The detection and abundance of H2O were more strongly dependent on what method the researchers used to analyze the data.

The appearance of CS2 (carbon disulphide) in TOI-270 d’s atmosphere is intriguing. It’s considered a detectable biomarker in hycean world atmospheres, as well as in hydrogen-rich atmospheres of rocky worlds, although the direct sources could also be volcanic or photochemical.

The atmospheric spectrum also contains hints of C2H6 (ethane). Ethane can be a byproduct of photochemical reactions involving methane and other gases, including biogenic ones, so its presence is another indication that methane is present. The researchers also point out that the abundances of ethane and carbon disulphide are well above theoretical predictions. "More observations are required to robustly constrain the presence and abundances of both molecules," they write.

All the researchers can conclude is that TOI-270 d is a candidate hycean world. But while the previous HST observations that hinted at its status showed the presence of H2O in an H2-rich atmosphere, the JWST observations provide more depth. The JWST's more robust detections of CH4 and CO2, along with its non-detection of NH3, make it an even stronger hycean world candidate.

“The planet stands out as a promising Hycean candidate, consistent with its initial predictions as a world with the potential for habitable oceans beneath an H2-rich atmosphere,” the authors conclude.

The post Another Hycean Planet Found? TOI-270 d appeared first on Universe Today.

Categories: Science
