Here’s a new article in the Journal of Sexual Medicine that investigated the effects of gender-changing surgery on both males and females (over 18) with a diagnosis of gender dysphoria. The results won’t make gender extremists happy: in both cases, rates of mental distress, including anxiety and depression, were higher in those having surgery than in those not having surgery over two years of monitoring. However, this doesn’t mean that the surgery shouldn’t be done, as the authors note that other studies show that people undergoing surgical treatment are, over the longer term, generally happy with the outcome. The main lesson of the paper is that people who do undergo such surgeries should be monitored carefully for post-surgical declines in mental health.
Click the headline below to read.
The authors note that there are earlier but much smaller studies that show no decline in mental health after surgery, but these are plagued not only by small sample size, but also by non-representative sampling, reliance on self-report, and failure to diagnose other forms of mental illness beyond gender dysphoria before surgery. The present study, while remedying these problems, still has a few issues (see below).
The advantage of this study over earlier ones is that the samples of Lewis et al. are HUGE, based on the TriNetX database of over 113 million patients from 64 American healthcare organizations. Further, the patients were selected only because they had a diagnosis of gender dysphoria and no record of any other form of mental illness (of course, it could have been hidden). Patients were divided into four groups (actually six, but I’m omitting two since they lacked controls): natal males with gender dysphoria who did or didn’t have surgery, and natal females with and without surgery. Here are the four groups, and I’ve added the sample sizes to show how much data they have:
Cohort A: Patients documented as male (which may indicate natal sex or affirmed gender identity), aged ≥18 years, with a prior diagnosis of gender dysphoria, who had undergone gender-affirming surgery.
Cohort B: Male patients with the same diagnosis but without surgery. [Cohorts A and B each had 2774 patients.]
Cohort C: Patients documented as female, aged ≥18 years, with a prior diagnosis of gender dysphoria, who had undergone gender-affirming surgery.
Cohort D: Female patients with the same diagnosis but without surgery. [Cohorts C and D each had 3358 patients.]
A and B are the experimental and control groups for men, as are C and D for women. Further, within each comparison patients were matched for sex, race, and age to provide further controls. And here are the kinds of surgeries they had:
To be included, all patients had to be 18 years or older with a diagnosis of gender dysphoria, as identified by the ICD-10 code F64. This criterion was chosen based on literature highlighting elevated mental health concerns for transgender and nonbinary patients with gender dysphoria [15, 16]. Gender-affirming surgery cohorts consisted of patients with a documented diagnosis of gender dysphoria who had undergone specific gender-affirming surgical procedures. For transmen, this primarily included mastectomy (chest masculinization surgery, CPT codes 19303 and 19304), while for transwomen, this encompassed a range of feminizing procedures such as tracheal shave (CPT code 31899), breast augmentation (CPT code 19325), and vaginoplasty (CPT codes 57335 and 55970). These surgeries were identified using clinician-verified CPT codes within the TriNetX database, allowing for precise classification.
Note that there were a lot more “bottom” surgeries for trans-identifying men (as the authors call them, “transwomen”) than for trans-identifying women (“transmen”). Men elect to change their genitals more often than women do, even though, if you know how vaginoplasties are performed, you have to be hellbent on getting one. (I don’t know as much about the results of getting a confected penis.)
I’ll be brief with the results: in both comparisons, patients who had surgery had a significantly higher postsurgical risk of depression, anxiety, suicidal ideation, and substance abuse. But surgery had no effect on body dysmorphia: the obsession with flaws in one’s appearance. Here are the tables and statistical comparisons of cohorts A vs. B and C vs. D; the effect of surgery is considerable (results for women are similar, though the differences are smaller). Some of the differences are substantial: anxiety in men, for example, was nearly five times more frequent in those who had surgery than in those who did not.
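For readers curious about the arithmetic behind such cohort comparisons, here is a minimal sketch in Python of how a relative risk and its 95% confidence interval are computed from a 2×2 table. The counts are made up for illustration; they are not the paper’s numbers.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of cohort A vs. cohort B, with a 95% CI
    computed on the log scale (the standard Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts (NOT the study's data): anxiety diagnoses within
# two years in two matched cohorts of 2774 patients each.
rr, ci = relative_risk(events_a=500, n_a=2774, events_b=100, n_b=2774)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```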
As you see, there are significant differences for everything save body dysmorphia, for which there are no differences at all. The authors conclude that yes, surgery is associated with elevated mental distress, at least over the two-year measurement period (again, mental states were diagnosed by professionals, not taken from self-report). Given that surgery does seem to improve well-being over the long term, as the authors note twice, they conclude that the results counsel greater care for patients who have transitional surgery:
The findings of this study underscore a pressing need for enhanced mental health guidelines tailored to the needs of transgender individuals following gender-affirming surgery. Our analysis reveals a significantly elevated risk of mental health disorders—including depression, anxiety, suicidal ideation, and substance use disorder—post-surgery among individuals with a prior diagnosis of gender dysphoria. Importantly, however, our results indicate no increased risk of body dysmorphic disorder following surgery, suggesting that these individuals generally experience satisfaction with their body image and surgical outcomes. Notably, the heightened risk of mental health issues post-surgery was particularly pronounced among individuals undergoing feminizing transition compared to masculinizing transition, emphasizing the necessity for gender-sensitive approaches even after gender-affirming procedures.
Possible problems. There are two main limitations of the study noted by the authors. First, individuals electing surgery may have higher levels of distress to begin with than those who didn’t, so the elevated rate of mental disorders in the surgery group could be artifactual in that way. Second, patients who have had surgery may be wealthier or otherwise have more access to healthcare than those who didn’t, and so higher rates of mental distress could result simply from a difference in detectability.
Now I don’t know the literature on long-term effects of surgery on well-being, so I’ll accept the authors’ statement that they are positive, even though patients with greater well-being could, I suppose, still suffer more depression and anxiety. But those who are looking to say that there should be no surgery for those with gender dysphoria will not find support for that in this paper. What they will find is the conclusion that gender-altering surgery comes with mental health risks, and those must be taken into account. It’s always better, when dealing with such stuff, to have more rather than less information, so one can inform those contemplating surgery.
When I was writing Faith Versus Fact, I sometimes visited professors in our Divinity School, located right across the Quad. I discovered that the faculty was divided neatly into two parts. There were the Biblical scholars, who addressed themselves wholly to figuring out how the Bible was made, the chronology of its writing, comparisons of different religions, and so on. Their questions were basically historical and sociological, and I found that, as far as I could tell, most of this group were atheists.
Then there were the real theologians: the believers who engaged in prizing truth out of the Bible, and taking for granted that yes, there was a god and somehow the Bible had something to tell us about him. These I had little use for. Indeed, if you look up “theology” in the Oxford English Dictionary, you find this as the relevant definition. It describes the second class of academics who inhabit the Div School—the ones who accept that there is a god:
After writing my book, and having to plow through volume after volume of theology, including theological luminaries like Langdon Gilkey, Martin Marty, Alvin Plantinga, William Lane Craig, John Polkinghorne, Edward Feser, C. S. Lewis (cough) and Karen Armstrong, I finished my two years of reading realizing that I had learned nothing about the “nature and attributes of God and His relations with man and the universe.” That, of course, is because there is no evidence for god, and the Bible, insofar as it treats of things divine, is fictional. Yes, there is anthropology in the Bible, as Richard Dawkins notes below, but it tells us absolutely nothing about god, his plan, or how he works. If you don’t believe me, consult the theologians of other faiths: Hindus, Muslims, and yes, Scientologists. They find a whole different set of “truths”! There is no empirical truth that adds to what humanists have found (as Dawkins notes below, “moral truths” are not empirical truths), but only assertions that can’t be tested. (Well, a few facts are correct, but many, like the Exodus of the Jews from Egypt and the census that drove the Jesus Family to Bethlehem, are flatly wrong.)
The discipline of theology as described by the OED is a scam, and I’m amazed that people get paid to do it. The atheist Thomas Jefferson (perhaps he was a deist) realized this, and, when he founded the University of Virginia, prohibited any religious instruction. But pressure grew over the centuries, and I see that U. VA. now has a Department of Religious Studies, founded in 1967. So much the worse for them.
In the end, the only value I see in theology comprises the anthropological, sociological, and psychological aspects: what can we discern about what people thought and how they behaved in the past, and how the book was cobbled together. I see no value in its exegesis of God’s ways and thoughts.
And so I agree with what Richard says in the video below. Here he discusses the “value” of theology, but the only value he sees is as “form of anthropology. . . the only form of theology that is a subject is historical scholarship, literary scholarship. . . that kind of thing.” (“Clip taken from the Cosmic Skeptic Podcast #10.”)
I just wrote a piece for another venue that partly involves theology (stay tuned), and once again I was struck by the intellectual vacuity and weaselly nature of traditional theologians. And so I ask readers a question:
What is the value of theology? Has its endless delving into the nature of God and his ways yielded anything of value?
And I still don’t think that divinity schools are of any value, even though we have one at Chicago. After all, given their concentration on Christianity and Judaism, they are entire schools devoted to a single work of fiction. Granted, it’s an influential work of fiction, and deserves extra attention for that, but trying to pry truth out of it. . . well, it’s wasted effort and money.
I asked this question five years ago, noting that Dan Barker defined theology as “a subject without an object.”
A few kindly readers, such as ecologist Susan Harrison of UC Davis, have sent in photos, so the feature is not yet moribund. Susan’s narrative and IDs are indented, and you can enlarge the owl photos by clicking on them.
A winter visit to the owls of Bob Dylan Country
Many North American owls are not regularly migratory like songbirds, but will shift many miles to the north or south depending on yearly weather conditions and prey availability. Once every five or more years, the northernmost Midwest receives a winter influx of Boreal Owls (Aegolius funereus). The arrival of this handsome little raptor is so exciting that some birders will travel from as far away as (say) California for a weekend to see it.
Having heard about the Boreal Owls in January, I reached out to a local guide and arranged a late February trip to Two Harbors, Minnesota on the north shore of Lake Superior. On our first day it seemed I might have waited too long. The weather had warmed and no owls had been reported for a few days. We spent 10 fruitless hours cruising the roads and staring obsessively into the willows, alders, and small spruce along the verges. Had the owls moved back north?
Our second day dawned as clear and cold as a proper Minnesota winter morning. Not half an hour into our renewed search, a teardrop-shaped gray bundle stared back at us from the roadside shrubbery. With a nod to Bob Dylan, “Highway 61 Revisited” describes exactly how we found this owl!
Our first Boreal Owl:
Later that day we saw another one at Sax-Zim Bog, a famous destination for seeking overwintering owls of multiple species.
Our second Boreal Owl:
We were greatly helped by the close-knit network of regional owlers who share sightings with one another over an app. They guard information closely to spare owls from excessive attention.
Owlers at our second Boreal Owl sighting:
Having achieved success with the elusive Boreal Owl, we cruised around Sax-Zim Bog looking for the magnificent and more regularly occurring Great Gray Owl (Strix nebulosa). These are similar to Boreal Owls in being boreal forest inhabitants, nonmigratory, and shifting farther south in some years. We found a very sleepy owl perched along a roadside.
Great Gray Owl:
Finally we looked for Snowy Owls (Bubo scandiacus), which unlike the other two, undergo a regular winter migration to this area from their breeding grounds in the high Arctic. In most years they reach only the northern tier of US states, but they wander much farther south every now and then. They seem to be highly adaptable; one reliable place to see them, in fact, is the industrial district of Superior, Wisconsin. I think Bob Dylan would approve of their taste in gritty, down-to-earth surroundings.
Snowy Owl:
As part of my post last week about measurement and measurement devices, I provided a very simple example of a measuring device. It consists of a ball sitting in a dip on a hill (Fig. 1a), or, as a microscopic version of the same, a microscopic ball, made out of only a small number of atoms, in a magnetic trap (Fig. 1b). Either object, if struck hard by an incoming projectile, can escape and never return, and so the absence of the ball from the dip (or trap) serves to confirm that a projectile has come by. The measurement is crude — it only tells us whether there was a projectile or not — but it is reasonably definitive.
Fig. 1a: A ball in a dimple on the side of the hill will be easily and permanently removed from its perch if struck by a passing object. Fig. 1b: Similarly, a microscopic ball in a trap made from electric and/or magnetic fields may easily escape the trap if struck.

In fact, we could learn more about the projectile with a bit more work. If we measured the ball’s position and speed (approximately, to the degree allowed by the quantum uncertainty principle), we would get an estimate of the energy carried by the projectile and the time when the collision occurred. But how definitive would these measurements be?
With a macroscopic ball, we’d be pretty safe in drawing conclusions. However, if the objects being measured and the measurement device are ultra-microscopic — something approaching atomic size or even smaller — then the measurement evidence is fragile. Our efforts to learn something from the microscopic ball will be in vain if the ball suffers additional collisions before we get to study it. Indeed, if a tiny ball interacts with any other object, microscopic or macroscopic, there is a risk that the detailed information about its collision with the projectile will be lost, long before we are able to obtain it.
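As a toy illustration of that inference (my own sketch, not from the original post), suppose the collision is head-on, one-dimensional, and elastic, and that the ball coasts freely once it escapes. Then the ball’s measured position and speed determine both the projectile’s energy and the time of the collision:

```python
def infer_projectile(m_proj, m_ball, x_ball, v_ball, t_now, x0=0.0):
    """Toy inference, assuming a 1D head-on elastic collision with the
    ball initially at rest, and free motion of the ball afterward.

    Elastic-collision kinematics give:
        v_ball = 2 * m_proj * v_proj / (m_proj + m_ball)
    """
    v_proj = v_ball * (m_proj + m_ball) / (2 * m_proj)   # projectile's speed
    energy = 0.5 * m_proj * v_proj**2                    # its kinetic energy
    t_collision = t_now - (x_ball - x0) / v_ball         # when it struck
    return energy, t_collision

# Illustrative numbers in arbitrary units:
E, t0 = infer_projectile(m_proj=1.0, m_ball=10.0,
                         x_ball=5.0, v_ball=0.5, t_now=12.0)
print(f"Projectile energy ≈ {E:.2f}, collision at t ≈ {t0:.2f}")
```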
Amplify Quickly

The best way to keep this from happening is to quickly translate the information from the collision, as captured in the microscopic ball’s behavior, into some kind of macroscopic effect. Once the information is stored macroscopically, it is far harder to erase.
For instance, while a large meteor striking the Earth might leave a pond-sized crater, a subatomic particle striking a metal table might leave a hole only an atom wide. It doesn’t take much to fill in an atom-sized hole in the blink of an eye, but a crater that you could swim in isn’t going to disappear overnight. So if we want to know about the subatomic particle’s arrival, it would be good if we could quickly cause the hole to grow much larger.
This is why almost all microscopic measurements include a step of amplification — the conversion of a microscopic effect into a macroscopic one. Finding new, clever and precise ways of doing this is part of the creativity and artistry of experimental physicists who study atoms, atomic nuclei, or elementary particles.
There are various methods of amplification, but most methods can be thought of, in a sort of cartoon view, as a chain of ever more stable measurements, such as this:
A classic and simple device that uses amplification is a Geiger counter (or Geiger-Müller counter). (Hans Geiger, while a postdoctoral researcher for Ernest Rutherford, performed a key set of experiments that Rutherford eventually interpreted as evidence that atoms have tiny nuclei.) This counter, like our microscopic ball in Fig. 1b, simply records the arrival of high-energy subatomic projectiles. It does so by turning the passage of a single ultra-microscopic object into a measurable electric current. (Often it is designed to make a concurrent audible electronic “click” for ease of use.)
How does this device turn a single particle, with a lot of energy relative to a typical atomic energy level but very little relative to human activity, into something powerful enough to create a substantial, measurable electric current? The trick is to use the electric field to create a chain reaction.
The Electric Field

The electric field is present throughout the universe (like all cosmic fields). But usually, between the molecules of air or out in deep space, it is zero or quite small. However, when it is strong, as when you have just taken off a wool hat in winter, or just before a lightning strike, it can make your hair stand on end.
More generally, a strong electric field exerts a powerful pull on electrically charged objects, such as electrons or atomic nuclei. Positively charged objects will accelerate in one direction, while negatively charged objects will accelerate in the other. That means that a strong electric field will pull positive and negative charges apart, speeding them up as it separates them.

Meanwhile, electrically neutral objects are largely left alone.
The Strategy

So here’s the strategy behind the Geiger-Müller counter. Start with a gas of atoms sitting inside a closed tube, in a region with a strong electric field. Atoms are electrically neutral, so they aren’t much affected by the field.
But the atoms will serve as our initial measurement devices. If a high-energy subatomic particle comes flying through the gas, it will strike some of the gas atoms and “ionize” them — that is, it will strip an electron off the atom. In doing so it breaks the electrically neutral atom into a negatively charged electron and a positively charged leftover, called an “ion.”
If it weren’t for the strong electric field, the story would remain microscopic; the relatively few ions and electrons would quickly find their way back together, and all evidence of the atomic-scale measurements would be lost. But instead, the powerful electric field causes the ions to move in one direction and the electrons to move in the opposite direction, so that they cannot simply rejoin each other. Not only that, the field causes these subatomic objects to speed up as they separate.
This is especially significant for the electrons, which pick up so much speed that they are able to ionize even more atoms — our secondary measurement devices. Now the number of electrons freed from their atoms has become much larger.
The effect is a chain reaction, with more and more electrons stripped off their atoms and accelerated by the electric field to high speed, allowing them in their turn to ionize yet more atoms. The resulting cascade, or “avalanche,” is called a Townsend discharge; it was discovered in the late 1890s. In a tiny fraction of a second, the small number of electrons liberated by the passage of a single subatomic particle has been multiplied exceedingly, and a crowd of electrons now moves through the gas.
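Here is a minimal numerical sketch of that exponential multiplication (my own illustration; the coefficient and counts are arbitrary, not calibrated to any real counter). In a Townsend discharge, the number of free electrons grows roughly as N(x) = N₀·e^(αx), where α is the ionization rate per unit distance:

```python
import math

alpha = 3.0   # hypothetical first Townsend coefficient: ionizations per mm
n0 = 5        # electrons freed directly by the passing particle

# Each electron, accelerated by the field, frees more electrons as it
# travels, so the population grows exponentially with distance.
for x_mm in range(6):
    n = n0 * math.exp(alpha * x_mm)
    print(f"after {x_mm} mm: ~{n:,.0f} electrons")
```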
The chain reaction continues until this electron mob arrives at a wire in the center of the counter — the final measurement device in the long chain from microscopic to macroscopic. The inflow of a huge number of electrons onto the wire, combined with the flow of the ions onto the wall of the device, causes an electrical current to flow. Thanks to the amplification, this current is large enough to be easily detected, and in response a separate signal is sent to the device’s sound speaker, causing it to make a “click!”
Broader Lessons

It’s worth noting that the strategy behind the Geiger-Müller counter requires an input of energy from outside the device, supplied by a battery or the electrical grid. When you think about it, this is not surprising. After the initial step there are rather few moving electrons, and their total motion-energy is still rather low; but by the end of the avalanche, the motion-energy of the tremendous number of moving electrons is far greater. Since energy is conserved, that energy has to have come from somewhere.
Said another way, to keep the electric field strong amid all these charged particles, which would tend to cancel the field out, requires the maintenance of high voltage between the outer wall and inner wire of the counter. Doing so requires a powerful source of energy.
Without this added energy and the resulting amplification, the current from the few initially ionized atoms would be extremely small, and the information about the passing high-energy particle could easily be lost due to ordinary microscopic processes. But the chain reaction’s amplification of the number of electrons and their total amount of energy dramatically increases the current and reduces the risk of losing the information.
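A back-of-envelope energy accounting (my own, with illustrative round numbers) makes the point concrete: each electron that crosses the tube picks up roughly e·V of energy from the voltage source, so the amplified signal ends up carrying vastly more energy than the original particle deposited.

```python
e = 1.602e-19       # electron charge in coulombs
voltage = 400.0     # a typical Geiger-tube operating voltage, roughly
n_final = 1e8       # electrons after the avalanche (illustrative guess)

energy_from_supply = n_final * e * voltage   # joules, order of magnitude
print(f"Energy drawn from the supply: ~{energy_from_supply:.1e} J")
# Compare with the ~1.6e-13 J carried by a 1 MeV particle: the energy
# of the amplified signal comes from the battery, not from the particle.
```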
Many devices, such as the photomultiplier tube for the detection of photons [particles of light], are like the Geiger-Müller counter in using an external source of energy to boost a microscopic effect. Other devices (like the cloud chamber) use natural forms of amplification that can occur in unstable systems. (The basic principle is similar to what happens with unstable snow on a steep slope: as any off-piste skier will warn you, under the correct circumstances a minor disturbance can cause a mountain-wide snow avalanche.) If these issues interest you, I suggest you read more about the various detectors and subdetectors at ongoing particle experiments, such as those at the Large Hadron Collider.
Amplification in a Simplified Setting

I’ve described the Geiger-Müller counter without any explicit reference to quantum physics. Is there any hope that we could understand how this process really takes place using quantum language, complete with a wave function?
Not in practice: the chain reaction is far, far too complicated. A quantum system’s wave function does not exist in the physical space we live in; it exists in the space of possibilities. Amplification involving hordes of electrons and ions forces us to consider a gigantic space of possibilities; for instance, a million particles moving in our familiar three spatial dimensions would correspond to a space of possibilities that has three million dimensions. Neither you nor I nor the world’s most expert mathematical physicist can visualize that.
Nevertheless, we can gain intuition about the basic idea behind this device by simplifying the chain reaction into a minimal form, one that involves just three objects moving in one dimension, and three stages:

1. a projectile strikes a microscopic ball (the “microball”), setting it in motion;
2. the microball is accelerated to high speed, as the electrons in the counter are;
3. the fast-moving microball strikes a larger, more stable ball (the “macroball”), which recoils.

You can think of these as the first steps of a chain reaction.
So let’s explore this simplified idea. As I often do, I’ll start with a pre-quantum viewpoint, and use that to understand what is happening in a corresponding quantum wave function.
The Pre-Quantum View

The pre-quantum viewpoint differs from that in my last post (which you should read if you haven’t already) in that we have two steps in the measurement rather than just one: first the microball measures the projectile, and then the macroball measures the microball.
The projectile, microball and macroball will be colored purple, blue and orange, and their positions along the x-axis of physical space will be referred to as x1, x2 and x3. Our space of possibilities then is a three-dimensional space consisting of all possible values of x1, x2 and x3.
The two-step measurement process really involves four stages:

1. the projectile strikes the microball, setting it in motion;
2. the microball travels onward, and is accelerated to high speed;
3. the fast-moving microball strikes the macroball;
4. the macroball recoils, carrying a stable record of the original collision.
The view of this process in physical space is shown on the left side of Fig. 2. Notice the acceleration of the microball between the two collisions.
Figure 2: (Left) In physical space, the projectile travels to the right and strikes the stationary microball, causing the latter to move; the microball is then accelerated to high speed and strikes the macroball, which recoils in response. The information from the initial collision has been transferred to the more stable macroball. (Right) The same process seen in the space of possibilities; note the labels on the axes. The system is marked by a red dot, with a gray trail showing its history. Note the two collisions and the acceleration between them. At the end, the system’s x3 is increasing, reflecting the macroball’s motion.

On the right side of Fig. 2, the motion of the three-object system within the space of possibilities is shown by the moving red dot. To make it easier to see how the red dot moves across the space of possibilities, I’ve plotted its trail across that space as a gray line. Notice there are two collisions, the first when the projectile and microball collide (x1=x2) and the second when the two balls collide (x2=x3), resulting in two sudden changes in the motion of the dot. Notice also the rapid acceleration between the first collision and the second, as the microball gains sufficient energy to give the macroball appreciable speed.
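If you’d like to play with this process yourself, here is a minimal pre-quantum simulation of the scenario in Fig. 2 (my own sketch; the masses, force, and starting positions are arbitrary illustrative choices):

```python
import numpy as np

m = np.array([1.0, 1.0, 50.0])   # masses: projectile, microball, macroball
x = np.array([0.0, 5.0, 15.0])   # initial positions x1, x2, x3
v = np.array([2.0, 0.0, 0.0])    # only the projectile moves at first
force = 4.0                      # force accelerating the microball between collisions
dt = 0.001

def elastic(m1, m2, v1, v2):
    """Velocities after a 1D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

hit1 = hit2 = False
trail = []                                   # the red dot's path in (x1, x2, x3)
for step in range(5000):
    if hit1 and not hit2:                    # accelerate only between collisions
        v[1] += (force / m[1]) * dt
    x = x + v * dt
    if not hit1 and x[0] >= x[1]:            # projectile strikes microball
        v[0], v[1] = elastic(m[0], m[1], v[0], v[1]); hit1 = True
    if hit1 and not hit2 and x[1] >= x[2]:   # microball strikes macroball
        v[1], v[2] = elastic(m[1], m[2], v[1], v[2]); hit2 = True
    trail.append(x.copy())

print("final positions (x1, x2, x3):", np.round(trail[-1], 2))
print("macroball velocity:", round(float(v[2]), 3))
```

Plotting `trail` as a curve in three dimensions reproduces the gray line on the right side of Fig. 2, complete with the two kinks at the collisions and the curved segment where the microball accelerates.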
The Quantum View

In quantum physics, the idea is the same, except that the dot representing the system’s value of (x1, x2, x3) is replaced by the peak of a spread-out wave function. It’s difficult to plot a wave function in three dimensions, but I can at least mark out the region where its absolute value is large — where the probability of finding the system is highest. I’ve sketched this in Fig. 3. Not surprisingly, it follows the same path as the system in Fig. 2.
Figure 3: Sketch of the wave function for this system (compare to Fig. 2), showing only the location of the highest peak of the wave function (the region where we are most likely to find the system).

In the pre-quantum case of Fig. 2, the red dot asserts certainty: if we were to measure x1, x2 and/or x3, we would find exactly the values of these quantities corresponding to the location of the dot. In the quantum physics of Fig. 3, the peak of the wave function asserts high probability but not certainty. The wave function is spread out; we don’t know exactly what we would find if we directly measured x1, x2 and x3 at any particular moment.
Still, the path of the wave function’s peak is very similar to the path of the red dot, as was also true in the previous post. Generally, in the examples we’ve looked at so far, we haven’t shown much difference between the pre-quantum viewpoint and the quantum viewpoint. You might even be wondering if they’re more similar than people say. But there can be big differences, as we will see very soon.
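To make the “peak follows the classical path” idea concrete, here is a tiny one-dimensional illustration (my own, not from the post, using natural units with ħ = m = 1): the probability peak of a freely spreading Gaussian wave packet tracks the classical trajectory x = x₀ + vt even as the packet widens.

```python
import numpy as np

x = np.linspace(-20, 60, 4000)
x0, v, sigma0 = 0.0, 2.0, 1.0   # initial center, velocity, initial width

def prob_density(t):
    # Width of a freely spreading Gaussian packet (hbar = m = 1)
    sigma_t = sigma0 * np.sqrt(1 + (t / (2 * sigma0**2))**2)
    return np.exp(-(x - (x0 + v*t))**2 / (2 * sigma_t**2)) / (np.sqrt(2*np.pi) * sigma_t)

for t in [0.0, 5.0, 10.0]:
    p = prob_density(t)
    print(f"t={t:4.1f}: peak at x ≈ {x[np.argmax(p)]:5.1f}, classical x = {x0 + v*t:5.1f}")
```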
The Wider View

If I could draw something with more than three dimensions, we could add another stage to our microball and macroball; we could accelerate the macroball and cause it to collide with something even larger, perhaps visible to the naked eye. Or instead of one macroball, we could amplify and transfer the microball’s energy to ten microballs, which in turn could have their energy amplified and transferred to a hundred microballs… and then we would have something akin to a Townsend discharge avalanche and a Geiger-Müller counter. Both in pre-quantum and in quantum physics, this would be impossible to draw; the space of possibilities is far too large. Nevertheless, the simple example in Figs. 2 and 3 provides some intuition for how a longer chain of amplification would work. It shows the basic steps needed to turn a fragile microscopic measurement into a robust macroscopic one, suitable for human scientific research or for our sense perceptions in daily living.
In the articles that will follow, I will generally assume (unless specified otherwise) that each microscopic measurement that I describe is followed by this kind of amplification and conversion to something macroscopic. I won’t be able to draw it, but as we can see in this example, the fundamental underlying idea isn’t that hard to understand.
Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems that use a guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR-associated) protein that can do things like cleave the DNA at the targeted location. We are really just at the beginning of exploring the potential of this new system, in both research and therapeutics.
Well – we may already have something better than CRISPR: TIGR-Tas. This is also an RNA-based system for targeting specific sequences of DNA and delivering a TIGR-associated protein to perform a specific function. TIGR (Tandem Interspaced Guide RNA) may have some useful advantages over CRISPR.
As presented in a new paper, TIGR is actually a family of gene-editing systems. It was discovered not by happy accident, but by specifically looking for it. As the paper details, the discovery came “through iterative structural and sequence homology-based mining starting with a guide RNA-interaction domain of Cas9”. This means they started with Cas9 and then trawled through the vast database of phage and parasitic bacteria for similar sequences. They found what they were looking for – another family of RNA-guided gene-editing systems.
Like CRISPR, TIGR is an RNA-guided system with a modular structure: different Tas proteins can be coupled with the TIGR to perform different actions at the targeted site. But there are several potential advantages of TIGR over CRISPR. Unlike CRISPR, TIGR uses both strands of the DNA to find its target sequence, and this “dual-guided” approach may lead to fewer off-target errors. While CRISPR works very well, there is a trade-off in CRISPR systems between speed and precision: the faster it works, the greater the number of off-target actions – like cleaving the DNA in the wrong place. The hope is that TIGR will make fewer off-target mistakes because of better targeting.
TIGR also has “PAM-independent targeting”. What does that mean? PAM stands for protospacer adjacent motif – a short DNA sequence, about six base pairs, that sits next to the sequence being targeted by CRISPR. The Cas9 nuclease will not function without the PAM. It appears to have evolved so that bacteria using CRISPR as an adaptive immune system can tell self from non-self, as invading viruses will carry the PAM sequences while the native DNA will not. The end result is that CRISPR needs PAM sequences in order to function, but the TIGR system does not. This makes the TIGR system much more versatile.
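To see why a PAM requirement matters, here is a toy Python sketch (my own, not from the paper; the sequences are invented and the guide is shortened for readability). A Cas9-style search demands the classic NGG motif immediately downstream of the match, while a PAM-independent system can use every matching site:

```python
import re

genome = "ATGCCGTACGGATTACAGGTTTACGGCCGTACGGATAGGACAGG"  # invented sequence
target = "CCGTACGGAT"   # toy guide (real guides are ~20 nt)

# Cas9-style: a target match must be followed by an NGG PAM
pam_dependent = [m.start() for m in re.finditer(f"{target}(?=.GG)", genome)]
# PAM-independent: any match will do
pam_independent = [m.start() for m in re.finditer(target, genome)]

print("Usable sites with an NGG PAM requirement:", pam_dependent)
print("Usable sites without a PAM requirement: ", pam_independent)
```

The same target occurs twice in this made-up genome, but only one copy happens to sit next to a PAM, so the PAM-dependent search finds half the sites.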
I saved what is potentially the best advantage for last – Tas proteins are much smaller than Cas proteins, about a quarter of the size. At first this might not seem like a huge advantage, but for some applications it is. One of the main limiting factors for using CRISPR therapeutically is getting the CRISPR-Cas complex into human cells. There are several available approaches – physical methods like direct injection, chemical methods, and viral vectors. Each method, however, generally has a size limit on the package it can deliver into a cell. Adeno-associated virus vectors (AAVs), for example, have lots of advantages but can deliver only relatively small payloads. Having a much more compact gene-editing system is therefore a huge potential advantage.
When it comes to therapeutics, the delivery system is perhaps a greater limiting factor than the gene-targeting and editing system itself. There are currently two FDA-approved indications for CRISPR-based therapies, both for blood disorders (sickle cell disease and thalassemia). For these disorders, bone marrow can be removed from the patient, CRISPR applied to make the desired genetic changes, and the bone marrow then transplanted back into the patient. In essence, we bring the cells to the CRISPR rather than the CRISPR to the cells. But how do we deliver CRISPR to a cell population within a living adult human?
We use the methods I listed above, such as the AAVs, but these all have limitations. Having a smaller package to deliver, however, will greatly expand our options.
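To put rough numbers on the packaging problem (my own sketch; the sizes are approximate published figures, and the Tas size is simply inferred from the “about a quarter” claim above):

```python
AAV_CAPACITY_BP = 4700      # approximate AAV packaging limit, base pairs
SPCAS9_BP = 4100            # approximate SpCas9 coding sequence
TAS_BP = SPCAS9_BP // 4     # hypothetical Tas protein, ~1/4 the size
OVERHEAD_BP = 900           # illustrative allowance for promoter, guide, polyA

for name, size in [("SpCas9", SPCAS9_BP), ("Tas (hypothetical)", TAS_BP)]:
    total = size + OVERHEAD_BP
    verdict = "fits" if total <= AAV_CAPACITY_BP else "does NOT fit"
    print(f"{name}: ~{total} bp vs. ~{AAV_CAPACITY_BP} bp AAV limit -> {verdict}")
```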
The world of genetic engineering is moving incredibly fast. We are taking advantage of the fact that nature has already tinkered with these systems for hundreds of millions of years. There are likely more systems and variations out there for us to find. But already we have powerful tools to make precise edits of DNA at targeted locations, and TIGR just adds to our toolkit.