Connectivity and dysconnectivity: A brief history of functional connectivity research in schizophrenia and future directions

Eva Mennigen , ... Vince D. Calhoun , in Connectomics, 2019

Technical foundations: Static functional network connectivity

ICA is a data-driven technique that uses higher-order statistics to separate signals from multiple hidden sources and is viewed as a special case of blind source separation. The complexity of neuroimaging data, with many thousands of voxels and associated time courses, is well suited to ICA (Beckmann and Smith, 2005; Calhoun et al., 2001a,b; McKeown et al., 1998). For resting-state fMRI data, independence commonly refers to the spatial dimension, and the goal is to separate brain regions with coupled time courses, i.e., independent components (Calhoun et al., 2001a,b; McKeown et al., 1998). Compared with the approaches discussed earlier in this chapter, the preprocessing of resting-state fMRI data for ICA is less complex because, by separating the data into independent components, ICA also identifies sources of noise. Motion removal and bandpass filtering are typically applied after ICA, so that more variance is preserved in the data entering the ICA algorithm. With regard to resting-state fMRI connectivity analysis, ICA is often performed on concatenated data from all subjects in the sample (after data reduction steps involving principal component analysis), which is called group ICA (GICA; Calhoun et al., 2001a,b; Calhoun and Adali, 2012; Correa et al., 2007). The output of such a spatial GICA comprises the group's mean spatial maps and the corresponding time courses for all independent components. These spatial maps and time courses are used to back-reconstruct each subject's individual spatial maps and time courses. Multiple studies have shown that the GICA approach is reliable and preserves individual differences (Allen et al., 2012b; Biswal et al., 2010; Calhoun and Adali, 2012; Du and Fan, 2013; Erhardt et al., 2011).
The model order of the group ICA approach determines how many independent components, or sources, the data are separated into, and it has a large influence on the ICA decomposition when under- or overestimated (Allen et al., 2012b). Many recent studies use a higher model order (e.g., 100 independent components). Independent components identified as conveying nonartifactual information (ICNs) are then used to calculate the covariance among all ICNs, yielding a static functional network connectivity (FNC) matrix. In contrast to FC, FNC is defined as the temporal covariation across ICNs (Jafri et al., 2008). The most commonly used metric to calculate FNC is the Pearson correlation coefficient. Group differences can be investigated in the correlation values between ICNs, in activation differences in spatial maps, or in frequency fluctuations of time courses (Zou et al., 2008). The steps involved in group ICA and in static and dynamic FNC are summarized in Fig. 3.

Fig. 3

Fig. 3. Step-by-step schematic of group independent component analysis and functional network connectivity (FNC). First, group independent component analysis decomposes the data, yielding spatial maps with associated time courses. For static FNC, a covariance matrix is computed from the nonartifactual components. For dynamic FNC, a sliding temporal window splits the time courses into smaller chunks, which are then clustered into recurring whole-brain connectivity patterns.
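The static and dynamic FNC computations summarized above can be sketched in a few lines of NumPy. This is a minimal illustration only: the time courses are random stand-ins for back-reconstructed ICN time courses, and the window length and step are hypothetical parameters, not values from the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_icns = 300, 5                     # illustrative sizes
tc = rng.standard_normal((n_timepoints, n_icns))  # stand-in ICN time courses

# Static FNC: Pearson correlation between every pair of ICN time courses
static_fnc = np.corrcoef(tc, rowvar=False)        # shape (n_icns, n_icns)

# Dynamic FNC: correlations within a sliding temporal window
win, step = 50, 10                                # hypothetical window parameters
starts = range(0, n_timepoints - win + 1, step)
dyn_fnc = np.stack([np.corrcoef(tc[s:s + win], rowvar=False) for s in starts])
# dyn_fnc has shape (n_windows, n_icns, n_icns); in practice these windowed
# matrices would then be clustered (e.g., with k-means) into recurring states
```

In a real pipeline the clustering step would yield the recurring whole-brain connectivity patterns shown in the dynamic branch of Fig. 3.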


URL:

https://www.sciencedirect.com/science/article/pii/B9780128138380000078

Custom Filters

Steven W. Smith , in Digital Signal Processing: A Practical Guide for Engineers and Scientists, 2003

Optimal Filters

Figure 17-7a illustrates a common filtering problem: trying to extract a waveform (in this example, an exponential pulse) buried in random noise. As shown in (b), this problem is no easier in the frequency domain. The signal has a spectrum composed mainly of low-frequency components. In comparison, the spectrum of the noise is white (the same amplitude at all frequencies). Since the spectra of the signal and noise overlap, it is not clear how the two can best be separated. In fact, the real question is how to define what "best" means. We will look at three filters, each of which is "best" (optimal) in a different way. Figure 17-8 shows the filter kernel and frequency response for each of these filters. Figure 17-9 shows the result of using these filters on the example waveform of Fig. 17-7a.

FIGURE 17-7. Example of optimal filtering. In (a), an exponential pulse buried in random noise. The frequency spectra of the pulse and noise are shown in (b). Since the signal and noise overlap in both the time and frequency domains, the best way to separate them isn't obvious.

FIGURE 17-8. Example of optimal filters. In (a), three filter kernels are shown, each of which is optimal in some sense. The corresponding frequency responses are shown in (b). The moving average filter is designed to have a rectangular pulse for a filter kernel. In comparison, the filter kernel of the matched filter looks like the signal being detected. The Wiener filter is designed in the frequency domain, based on the relative amounts of signal and noise present at each frequency.

FIGURE 17-9. Example of using three optimal filters. These signals result from filtering the waveform in Fig. 17-7 with the filters in Fig. 17-8. Each of these three filters is optimal in some sense. In (a), the moving average filter results in the sharpest edge response for a given level of random noise reduction. In (b), the matched filter produces a peak that is farther above the residue noise than provided by any other filter. In (c), the Wiener filter optimizes the signal-to-noise ratio.

The moving average filter is the topic of Chapter 15. As you recall, each output point produced by the moving average filter is the average of a certain number of points from the input signal. This makes the filter kernel a rectangular pulse with an amplitude equal to the reciprocal of the number of points in the average. The moving average filter is optimal in the sense that it provides the fastest step response for a given noise reduction.
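The rectangular kernel described above can be shown in a short NumPy sketch; the number of points M and the step input are arbitrary choices for demonstration.

```python
import numpy as np

# Moving average filter: the kernel is a rectangular pulse whose amplitude
# is the reciprocal of the number of points averaged (M is arbitrary here)
M = 5
kernel = np.full(M, 1.0 / M)

x = np.zeros(8)
x[3:] = 1.0                       # a noiseless step input
y = np.convolve(x, kernel)        # each output point is the average of M inputs
# The step edge is spread over M samples, ramping linearly from 0 to 1
```

The ramp in `y` is the step response the text refers to: fastest rise for a given amount of noise reduction.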

The matched filter was previously discussed in Chapter 7. As shown in Fig. 17-8a, the filter kernel of the matched filter is the same as the target signal being detected, except it has been flipped left-for-right. The idea behind the matched filter is correlation, and this flip is required to perform correlation using convolution. The amplitude of each point in the output signal is a measure of how well the filter kernel matches the corresponding section of the input signal. Recall that the output of a matched filter does not necessarily look like the signal being detected. This doesn't really matter; if a matched filter is used, the shape of the target signal must already be known. The matched filter is optimal in the sense that the top of the peak is farther above the noise than can be achieved with any other linear filter (see Fig. 17-9b).
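A minimal sketch of this idea, assuming a hypothetical exponential pulse as the target signal: the kernel is the time-reversed target, so convolution performs correlation, and the output peaks where the target sits in the input.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.exp(-np.arange(20) / 5.0)   # hypothetical exponential pulse
kernel = target[::-1]                    # the target flipped left-for-right

x = 0.2 * rng.standard_normal(200)       # random noise...
x[80:100] += target                      # ...with the pulse buried at sample 80

# Convolving with the flipped kernel performs correlation: each output point
# measures how well the kernel matches that section of the input signal
y = np.convolve(x, kernel, mode="valid")
peak = int(np.argmax(y))                 # location of the best match
```

Note that `y` looks nothing like the exponential pulse itself, exactly as the text warns; only the peak location matters.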

The Wiener filter (named after the optimal estimation theory of Norbert Wiener) separates signals based on their frequency spectra. As shown in Fig. 17-7b, at some frequencies there is mostly signal, while at others there is mostly noise. It seems logical that the "mostly signal" frequencies should be passed through the filter, while the "mostly noise" frequencies should be blocked. The Wiener filter takes this idea a step further; the gain of the filter at each frequency is determined by the relative amount of signal and noise at that frequency:

EQUATION 17-1

The Wiener filter. The frequency response, represented by H[f], is determined by the frequency spectra of the noise, N[f], and the signal, S[f]. Only the magnitudes are important; all of the phases are zero.

H[f] = S[f]² / (S[f]² + N[f]²)

This relation is used to convert the spectra in Fig. 17-7b into the Wiener filter's frequency response in Fig. 17-8b. The Wiener filter is optimal in the sense that it maximizes the ratio of the signal power to the noise power (over the length of the signal, not at each individual point). An appropriate filter kernel is designed from the Wiener frequency response using the custom filter method.
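Equation 17-1 translates directly into code. The sketch below assumes a hypothetical low-pass signal spectrum and a flat (white) noise spectrum, in the spirit of Fig. 17-7b; only magnitudes are used, with all phases zero.

```python
import numpy as np

f = np.linspace(0.0, 0.5, 129)     # frequency axis, as a fraction of fs
S = np.exp(-f / 0.05)              # assumed low-frequency signal spectrum
N = np.full_like(f, 0.2)           # white noise: same amplitude at all frequencies

# Eq. 17-1: the gain at each frequency is set by the relative amounts of
# signal and noise power at that frequency
H = S**2 / (S**2 + N**2)
# Where signal dominates, H approaches 1; where noise dominates, H approaches 0
```

A filter kernel would then be obtained from `H` by the custom-filter procedure (inverse transform, shift, truncate, window).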

While the ideas behind these optimal filters are mathematically elegant, they often fall short in practice. This isn't to say they should never be used. The point is: don't hear the word "optimal" and stop thinking. Let's look at several reasons why you might not want to use them.

First, the difference between the signals in Fig. 17-9 is very unimpressive. In fact, if you weren't told what parameters were being optimized, you probably couldn't tell by looking at the signals. This is usually the case for problems involving overlapping frequency spectra. The small amount of extra performance obtained from an optimal filter may not be worth the increased program complexity, the extra design effort, or the longer execution time.

Second, the Wiener and matched filters are completely determined by the characteristics of the problem. Other filters, such as the windowed-sinc and moving average, can be tailored to your liking. Optimal filter advocates would claim that this diddling can only reduce the effectiveness of the filter. This is very arguable. Remember, each of these filters is optimal in one specific way (i.e., "in some sense"). This is seldom sufficient to claim that the entire problem has been optimized, especially if the resulting signals are interpreted by a human observer. For instance, a biomedical engineer might use a Wiener filter to maximize the signal-to-noise ratio in an electrocardiogram. However, it is not obvious that this also optimizes a physician's ability to detect irregular heart activity by looking at the signal.

Third, the Wiener and matched filters must be carried out by convolution, making them extremely slow to execute. Even with the speed improvements discussed in the next chapter (FFT convolution), the computation time can be excessively long. In comparison, recursive filters (such as the moving average or others presented in Chapter 19) are much faster and may provide an acceptable level of performance.


URL:

https://www.sciencedirect.com/science/article/pii/B9780750674447500546

Hypothalamus and Pituitary Gland

Joseph Feher , in Quantitative Human Physiology (Second Edition), 2017

Summary

The pituitary gland consists of two major components: the adenohypophysis and the neurohypophysis. Part of the adenohypophysis is the pars distalis, also referred to as the anterior lobe. The infundibular process of the neurohypophysis forms the posterior pituitary. The gland sits below the hypothalamus and is connected to it by the hypophyseal stalk. This stalk contains neuronal processes and blood vessels that collect neurotransmitters released by nerve cells in the hypothalamus. In this way, the endocrine system is connected to and controlled by the nervous system.

The hormones released by the posterior pituitary are ADH and oxytocin. These are synthesized by cells in the paraventricular and supraoptic nuclei of the hypothalamus and are transported down into the posterior pituitary by axoplasmic transport, bound within vesicles to neurophysin. Stimulation of the cells in the hypothalamus causes fusion of the vesicles and release of the hormones into the blood. Both ADH and oxytocin contain nine amino acids and show structural similarity. ADH release is increased by increased plasma osmolarity and by decreased blood volume. The actions of ADH are twofold. First, it increases the water and urea permeability of the distal nephron and thereby causes the kidneys to excrete a low volume of highly concentrated urine; this effect is due to cAMP- and PKA-mediated phosphorylation of aquaporin channels, which are then inserted into the apical membrane of kidney tubule cells. The second action of ADH is vasoconstriction, from which ADH derives its other name, vasopressin.

Oxytocin release is stimulated by stretch of the uterus and by suckling or associated psychosocial cues. The hormone causes uterine contraction and is essential in parturition. It also causes the milk "let-down" or milk ejection reflex: it stimulates contraction of myoepithelial cells in the breast and milk secretion by the breast.

The anterior pituitary releases a number of "master hormones" including TSH, LH, FSH, PRL, GH, and ACTH. These are released from specific cells in the anterior pituitary in response to releasing factors produced by neurons in the hypothalamus. These neurons package the releasing factors in neurotransmitter vesicles that dump their contents into the hypophyseal portal circulation that carries the factors to the anterior pituitary without dilution.

GH is synthesized and released by specific cells called somatotrophs in the anterior pituitary. These cells integrate at least five separate signals: GHRH, SST, ghrelin, IGF-1, and GH itself. GHRH is released from cells in the arcuate nucleus of the hypothalamus and stimulates GH release by somatotrophs through a Gs mechanism. SST is released from cells in the periventricular nucleus and inhibits GH release through a Gi mechanism. Ghrelin is released from the stomach during fasting and stimulates somatotrophs directly; it also stimulates release of GHRH by hypothalamic cells. GH negatively feeds back onto somatotrophs to reduce GH secretion. The hypothalamic cells respond to a variety of stimuli, and the resulting GH release is episodic and pulsatile.

Clinical Applications: GH Excess or Deficiency

Excess GH secretion from childhood causes gigantism and results in abnormally tall persons. The documented record for the tallest human being is probably held by Robert Wadlow (February 22, 1918–July 15, 1940), who reached a height of 8′11″ (2.72   m) but had not stopped growing at the time of his death at the age of 22. The average height for men in the United States is 5′10″ (1.78   m). Wadlow's weight at the time of death was 485   lb (220   kg), and he wore size 37AA shoes. He is sometimes known as the Alton Giant, after his home town of Alton, Illinois (see Figure 9.2.15).

Figure 9.2.15. Robert Wadlow compared to his father, Franklin Wadlow, at 5′11″.

Excess GH has its consequences. Wadlow suffered from muscle weakness and tendency toward infections. He required braces to walk, and one of these irritated his ankle, causing a blister and subsequent infection. This probably progressed to septicemia and he died in his sleep at age 22.

Excess GH secretion in the adult causes acromegaly, first described by Pierre Marie in 1886 as disordered somatic growth and proportion. Pituitary adenomas cause 95% of the cases of acromegaly. The adenomas typically are slow-growing tumors that appear in the third to fifth decade of life. The symptoms of acromegaly include glucose intolerance, widening of bones leading to coarser facial features and enlarged hands and feet, enlarged heart, liver, and kidneys, and thickened skin and enlarged muscles.

GH deficiency in childhood produces short adults with a general tendency toward obesity. Any malfunction in the cascade from GHRH secretion to target cell responsiveness could account for dwarfism. These people have delayed skeletal growth and sexual maturation, but they are otherwise healthy, with normal mental capacity. However, persons with GH deficiency have a reduced life expectancy due to cardiovascular and cerebrovascular diseases. Because the window of opportunity closes early in life, diagnosis of GH deficiency must be made early. Children whose height is 3 standard deviations (SD) below average, or who show growth deceleration (2   SD below average for 1 year), or combinations of these, should be evaluated for the cause of poor growth. A variety of conditions can cause secondary growth disorders: malnutrition, chronic diseases such as malabsorption and GI diseases, chronic liver disease, cardiovascular disease, or renal disease. Primary causes of GH deficiency lie in pituitary or hypothalamic dysfunction, or in IGF deficiency due to GH insensitivity.

Probably the most famous dwarf was Charles Stratton (January 4, 1838–July 15, 1883), who achieved fame through his association with P.T. Barnum. He was born in Bridgeport, CT, weighing 4.3   kg at birth. He stopped growing at 6 months of age, at a length of 25″ (64   cm). At 9 years of age he began to grow again, reaching 82.6   cm by age 18. He toured the United States and Europe as an entertainer, under the stage name General Tom Thumb, earning a fortune. He died of a stroke in 1883, at the age of 45.

GH has multiple effects on multiple systems. Excess produces gigantism in youth and acromegaly in adults; deficits in childhood cause dwarfism. GH increases the growth of long bones, increases the uptake of amino acids, increases blood glucose, mobilizes glycogen, and increases lipolysis. Linear growth stops upon closure of the epiphyseal growth plates in the long bones. The effect of GH on the growth plates is inhibited by fasting, through elevation of fibroblast growth factor 21 (FGF21), and by high levels of estradiol, probably mediated by increased expression of suppressor of cytokine signaling 2 (SOCS2).


URL:

https://www.sciencedirect.com/science/article/pii/B9780128008836000859

Data Communications in Distributed Control System

Peng Zhang , in Industrial Control Technology, 2008

6.2.3.6 Multiplexing Mode

Multiplexing is sending multiple signals or streams of information on a carrier at the same time in the form of a single, complex signal and then recovering the separate signals at the receiving end. In analog transmission, signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier bandwidth is divided into subchannels of different frequency widths, each carrying a signal at the same time in parallel. In digital transmission, signals are commonly multiplexed using time-division multiplexing (TDM), in which the multiple signals are carried over the same channel in alternating time slots. In some optical fiber networks, multiple signals are carried together as separate wavelengths of light in a multiplexed signal using dense wavelength division multiplexing (DWDM).

(1)

Frequency-division multiplexing (FDM). FDM is a scheme in which numerous signals are combined for transmission on a single communications line or channel. Each signal is assigned a different frequency (subchannel) within the main channel.

A typical analog Internet connection through a twisted pair telephone line requires approximately three kilohertz (3   kHz) of bandwidth for accurate and reliable data transfer. Twisted-pair lines are common in households and small businesses. But major telephone cables, operating between large businesses, government agencies, and municipalities, are capable of much larger bandwidths.

Suppose a long-distance cable is available with a bandwidth allotment of three megahertz (3   MHz). This is 3000   kHz, so in theory it is possible to place 1000 signals, each 3   kHz wide, into the long-distance channel. The circuit that does this is known as a multiplexer. It accepts the input from each individual end user and generates a signal on a different frequency for each of the inputs. This results in a high-bandwidth, complex signal containing data from all the end users. At the other end of the long-distance cable, the individual signals are separated out by means of a circuit called a demultiplexer and routed to the proper end users. A two-way communications circuit requires a multiplexer/demultiplexer at each end of the long-distance, high-bandwidth cable.
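The bookkeeping in this example can be sketched directly. The 3 MHz line and 3 kHz subchannels are the figures from the text; the subchannel edge frequencies are an illustrative assignment, not a real frequency plan.

```python
# FDM channel bookkeeping: a 3 MHz long-distance line divided into
# 3 kHz subchannels, one per end user
line_bandwidth_hz = 3_000_000
channel_bandwidth_hz = 3_000

n_channels = line_bandwidth_hz // channel_bandwidth_hz

# The multiplexer gives each input its own (low edge, high edge) subchannel
subchannels = [(i * channel_bandwidth_hz, (i + 1) * channel_bandwidth_hz)
               for i in range(n_channels)]
```

In practice guard bands between subchannels reduce the usable count somewhat below this theoretical 1000.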

When FDM is used in a communications network, each input signal is sent and received at maximum speed at all times. This is its chief asset. However, if many signals must be sent along a single long-distance line, the necessary bandwidth is large, and careful engineering is required to ensure that the system will perform properly. In some systems, a different scheme, known as TDM, is used instead.

(2)

Time-division multiplexing (TDM). Time-division multiplexing (TDM) is a method of putting multiple data streams in a single signal by separating the signal into many segments, each having a very short duration. Each individual data stream is reassembled at the receiving end based on the timing.

The circuit that combines signals at the source (transmitting) end of a communications link is known as a multiplexer. It accepts the input from each individual end user, breaks each signal into segments, and assigns the segments to the composite signal in a rotating, repeating sequence. The composite signal thus contains data from multiple senders. At the other end of the long-distance cable, the individual signals are separated out by means of a circuit called a demultiplexer, and routed to the proper end users. A two-way communications circuit requires a multiplexer/demultiplexer at each end of the long-distance, high-bandwidth cable.
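The rotating, repeating sequence described above amounts to simple interleaving. The sketch below is an idealized illustration with three hypothetical senders and equal-length segments; real TDM systems add framing and timing recovery.

```python
# Idealized TDM: the multiplexer interleaves one segment from each sender per
# time slot; the demultiplexer reassembles each stream purely by timing
def tdm_multiplex(streams):
    """Interleave equal-length streams into one composite sequence."""
    return [seg for slot in zip(*streams) for seg in slot]

def tdm_demultiplex(composite, n_streams):
    """Recover each stream by taking every n_streams-th segment."""
    return [composite[i::n_streams] for i in range(n_streams)]

streams = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]  # three hypothetical senders
composite = tdm_multiplex(streams)
recovered = tdm_demultiplex(composite, len(streams))
```

The composite sequence carries one segment from each sender in rotation, and the receiver recovers the original streams without any addressing information.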

If many signals must be sent along a single long-distance line, careful engineering is required to ensure that the system will perform properly. An asset of TDM is its flexibility: the scheme allows for variation in the number of signals being sent along the line and constantly adjusts the time intervals to make optimum use of the available bandwidth. The Internet is a classic example of a communications network in which the volume of traffic can change drastically from hour to hour. In some systems, a different scheme, known as FDM, is preferred.

(3)

Dense wavelength division multiplexing (DWDM). Dense wavelength division multiplexing (DWDM) is a technology that puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light wavelength. Using DWDM, up to 80 (and theoretically more) separate wavelengths or channels of data can be multiplexed into a light stream transmitted on a single optical fiber. Each channel carries a time-division multiplexed (TDM) signal. In a system with each channel carrying 2.5   Gbps (billion bits per second), up to 200 billion bits per second can be delivered by the optical fiber. DWDM is also sometimes called wave division multiplexing (WDM).

Since each channel is demultiplexed at the end of the transmission back into the original source, different data formats being transmitted at different data rates can be transmitted together. Specifically, Internet (IP) data, Synchronous Optical Network data (SONET), and asynchronous transfer mode (ATM) data can all be traveling at the same time within the optical fiber. DWDM promises to solve the "fiber exhaust" problem and is expected to be the central technology in the all-optical networks of the future.


URL:

https://www.sciencedirect.com/science/article/pii/B9780815515715500074

Data transmission interfaces

Peng Zhang , in Advanced Industrial Control Technology, 2010

(4) Multiplexing transmission modes

Multiplexing is sending multiple signals or streams of information on a carrier at the same time in the form of a single, complex signal, and then recovering the separate signals at the receiving end. In analog transmission, signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier bandwidth is divided into subchannels of different frequency widths, each carrying a signal at the same time in parallel. In digital transmission, signals are commonly multiplexed using time-division multiplexing (TDM), in which the multiple signals are carried over the same channel in alternating time slots. In some optical-fiber networks, multiple signals are carried together as separate wavelengths of light in a multiplexed signal using dense wavelength division multiplexing (DWDM).

Frequency-division multiplexing (FDM) is a scheme in which numerous signals are combined for transmission on a single transmission line or channel. Each signal is assigned a different frequency (subchannel) within the main channel. When FDM is used in a transmission network, each input signal is sent and received at maximum speed at all times, but if many signals must be sent along a single long-distance line, the necessary bandwidth is large, and careful engineering is required to ensure that the system will perform properly.

Time-division multiplexing (TDM) is a method of putting multiple data streams in a single signal by separating the signal into many segments, each having a very short duration. Each individual data stream is reassembled at the receiving end based on timing.

Dense wavelength division multiplexing (DWDM) is a technology that puts data from different sources together on an optical fiber, with each signal being carried at the same time, at its own separate wavelength. Using DWDM, up to 80 (and theoretically more) separate wavelengths or channels of data can be multiplexed into a light stream transmitted on a single optical fiber. Each channel carries a time-division multiplexed (TDM) signal. In a system with each channel carrying 2.5   Gbps (billion bits per second), up to 200 billion bits per second can be delivered by the optical fiber. DWDM is also sometimes called wave division multiplexing (WDM).

Since each channel is demultiplexed at the end of the transmission to retrieve the original source, different data formats can be transmitted together. Specifically, Internet Protocol (IP) data, Synchronous Optical Network data (SONET), and asynchronous transfer mode (ATM) data can all travel at the same time within the optical fiber. DWDM promises to solve the "fiber exhaust" problem and is expected to be the central technology in the all-optical networks of the future.


URL:

https://www.sciencedirect.com/science/article/pii/B9781437778076100142

Nucleic Acids Studied Using NMR

John C. Lindon , in Encyclopedia of Spectroscopy and Spectrometry, 1999

Assignment of NMR spectra

The usual approach used for proteins is also applied to NMR spectra of nucleic acids. 1D 1H NMR spectra can be supplemented with COSY or TOCSY spectra to establish spin system connectivities. Separate signals for the exchangeable amino and imino protons can often be observed, and the properties of these signals can be very indicative of nucleotide structure. For example, imino protons involved in Watson–Crick base pairing, such as A with T, generally resonate in a distinctive window between δ13.0 and δ14.5. The 1H chemical shifts in duplex structures can differ from those for bases in single strands, and thus the unwinding of a duplex can be monitored through these shift changes.

Because base protons and sugar protons are separated by a minimum of four bonds, spin couplings are not usually observed between these units, and recourse is made to NOE measurements, often as 2D NOESY studies. Thus, for example, NOEs observed between the sugar anomeric proton and H-6 or H-8 of a base serve to identify the base and sugar units of a single residue. NOE measurements can also be used to gain information on the sequence of residues in a nucleic acid.

In addition, heteronuclear coupling between 1H and 31P can be used to make sequential connectivities between residues. This is possible because there is a continuous relay of a series of homonuclear and heteronuclear couplings along the nucleotide backbone.

Variable-temperature studies can be very informative as they give information on the melting and denaturation of duplex structures.


URL:

https://www.sciencedirect.com/science/article/pii/B0122266803002143

The Tautomerism of Heterocycles: Five-membered Rings with Two or More Heteroatoms

Vladimir I. Minkin , ... Olga V. Denisko , in Advances in Heterocyclic Chemistry, 2000

b Kinetics of Proton Exchange in Solution

The relatively narrow range of chemical shifts inherent to 1H NMR spectroscopy often presents problems in slowing the fast proton exchanges in pyrazole-associated species in solution sufficiently to be able to observe separate signals for the individual tautomers. In early studies of the prototropic rearrangement 1a ⇌ 1b, its rate was found to be so high that the molecule retained, on the 1H NMR timescale, its effective C2v symmetry down to the lowest temperature achievable in these experiments. Use of HMPT as a solvent helped to freeze the proton transfer enough to observe both degenerate annular tautomers of pyrazole in the 1H NMR spectrum and to estimate the energy barrier to their interconversion, ΔG*(289 K), as 58.6   kJ mol−1 at 0.1–0.2 M solution concentration (77IZV2390).

The problem has been overcome most reliably by using 13C or 15N NMR techniques, which provide a wider range of chemical shifts and thus allow one to quantitatively characterize many dynamic processes that are too fast to be measured by 1H NMR spectroscopy.

One of the first studies of this type was accomplished by A. Nesmeyanov et al. (75T1461, 75T1463), who recorded well-resolved 13C NMR spectra (22.635   MHz) of unsubstituted pyrazole 1 in concentrated (1–2 M) solution in an ether/tetrahydrofuran mixture. At −118°C, the spectrum consisted of three separate signals, one for each of the carbon nuclei. The energy barrier estimated at the temperature of coalescence of the C-3 and C-5 signals (about −100°C) was 46   kJ mol−1. A significant [although later claimed to be exaggerated (93CJC1443)] decrease in the rate of the exchange reaction was observed for N-deuterated pyrazole, a fact which was explained by the contribution of tunneling to the mechanism of prototropic tautomerism in pyrazole associates in solution (76CPL184; 77KGS781). Subsequent to the studies of the Russian authors, W. Litchman (79JA545) reported kinetic data obtained from the temperature-induced collapse of the C-3 and C-5 NMR peaks of pyrazole in DMSO-d6. It was found that the proton exchange in pyrazole is, in fact, slow on the 13C NMR time scale and that the C-3 and C-5 peaks do not coalesce until the temperature is raised to 64°C. The free energy barrier, ΔG*, was determined to be as high as 61.9   kJ mol−1, which compares well with the values obtained for pyrazole (ΔG*   =   63   kJ mol−1) (77JOC659) and 3,5-dimethylpyrazole (ΔG*   =   63   kJ mol−1) (77JOC659; 85JA5290) in another dipolar solvent, HMPT, and in the solid state (ΔG*   =   57   kJ mol−1) by 13C CPMAS NMR.
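The way such coalescence data yield free-energy barriers can be sketched with the standard two-site treatment for equally populated exchanging sites: at coalescence the exchange rate is k_c = πΔν/√2, and the Eyring equation then gives ΔG*. The numbers below (a coalescence temperature of 250 K and a 500 Hz shift difference) are illustrative only, not the parameters of the cited studies.

```python
import math

R, kB, h = 8.314, 1.380649e-23, 6.62607015e-34  # J/(mol K), J/K, J s

def barrier_from_coalescence(Tc, dnu):
    """Estimate the free-energy barrier (kJ/mol) from the coalescence
    temperature Tc (K) and shift difference dnu (Hz) of two equally
    populated exchanging signals, via k_c = pi*dnu/sqrt(2) and Eyring."""
    kc = math.pi * dnu / math.sqrt(2.0)           # exchange rate at coalescence
    return R * Tc * math.log(kB * Tc / (h * kc)) / 1000.0

dG = barrier_from_coalescence(250.0, 500.0)       # roughly mid-40s kJ/mol
```

With these assumed inputs the estimate lands in the range of the pyrazole barriers quoted above, which is the point of the exercise rather than a reanalysis of the published data.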

Adding small amounts of water to Et2O/THF [77ZN(C)891] [or even significant ones to DMSO (79JA545)] solutions of pyrazole caused only slight effects on the rate of proton exchange. However, the use of acetone or other solvents resulted in very rapid proton exchange manifested by an average resonance for C-3 and C-5 even at 6°C (74JOC357; 77JOC659; 79JA545). The same effect was achieved by the addition of trace amounts of an acid to pyrazole solutions. This observation led to the conclusion (79JA545) that the cases of rapid proton exchange reported for pyrazole should be attributed to acid impurities present in the samples.

Accounting for this effect, it was possible to apply dynamic 1H NMR spectroscopy to measure the energy barriers to the prototropic rearrangements of pyrazoles. Variable-temperature spectra of a series of 4-substituted pyrazoles 5 and 6 have been studied in methanol-d4 solutions, and the free energy barriers of the degenerate tautomerization of type 2a ⇌ 2b were reported (93CJC1443).

The values of ΔG* correspond to the exchange reaction in N-deuterated pyrazoles 5,6 and are expected to be slightly higher than those for the rearrangements in N-H compounds.

In the case of 3,5-di-tert-butyl-4-nitrosopyrazole 7, the proton exchange reaction due to annular tautomerism observed in CD2Cl2 solution is accompanied by a second dynamic process, restricted N=O rotation [97JCS(P2)721]. By comparison with the spectral behavior of the N-methyl derivative of 7, it was found that the broadening and subsequent splitting of the tert-butyl signals observed on cooling the solution to −80°C should be attributed primarily to slowing of the proton exchange reaction. Traces of acids accelerate this process.

A notable result of the kinetic studies of proton exchange in pyrazole solutions is the high negative entropy of activation: ΔS* = –105 J K−1 mol−1 in DMSO (79JA545). Such a value points to a complex process rather than a simple solvent-facilitated ionization [see (89PAC699)]. The most important contribution to the understanding of the mechanisms of proton exchange in pyrazole and its derivatives has been made by high-resolution solid-state NMR spectroscopy.

URL: https://www.sciencedirect.com/science/article/pii/S0065272500760053

Macromolecule–Ligand Interactions Studied By NMR

J. Feeney , in Encyclopedia of Spectroscopy and Spectrometry, 1999

Dissociation rate constants from transfer of saturation studies

If protons are present in two magnetically distinct environments, for example one corresponding to the ligand free in solution and the other to the ligand bound to the protein, then under conditions of slow exchange separate signals are seen for the protons in the two forms. When the resonance of the bound proton is selectively irradiated (saturated), its saturation is transferred to the signal of the free proton via the exchange process, and the intensity of the free proton signal decreases. The rate of decrease of the magnetization in the free state as a function of the irradiation time of the bound proton can be analysed to provide the dissociation rate constant. This method has been used to measure the dissociation rate constants for the NADP+–DHFR complex (20 s−1 at 284 K) and the trimethoprim–DHFR complex (6 s−1 at 298 K). 2D-NOESY/EXCHANGE-type experiments can also be used for such measurements.
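The analysis described above reduces to a two-site saturation-transfer (Forsén–Hoffman) treatment: with the bound resonance held saturated, the free-ligand magnetization decays to a steady state set by the competition between longitudinal relaxation (R1) and exchange into the saturated bound site. A minimal sketch, with illustrative rather than experimental numbers:

```python
import math

def exchange_rate_from_saturation_transfer(m_ss_over_m0, r1_free):
    """Forsen-Hoffman steady state: with the bound resonance saturated,
    M_free(inf)/M_free(0) = R1 / (R1 + k), hence k = R1 * (M0/Mss - 1).
    k is the pseudo-first-order rate at which free-ligand spins exchange
    into the (saturated) bound site."""
    return r1_free * (1.0 / m_ss_over_m0 - 1.0)

def free_signal_decay(t, k, r1):
    """Normalized free-ligand magnetization during saturation of the
    bound resonance: M(t) = (R1 + k * exp(-(R1 + k) t)) / (R1 + k)."""
    lam = k + r1
    return (r1 + k * math.exp(-lam * t)) / lam

# Illustrative numbers: R1 = 1 s^-1 and a steady-state intensity of 20%
# of the equilibrium value imply an exchange rate of 4 s^-1.
k = exchange_rate_from_saturation_transfer(0.20, 1.0)
print(f"k = {k:.1f} s^-1")  # k = 4.0 s^-1
```

In practice the full time course is fitted (which also yields R1), and the exchange rate is related to the dissociation rate constant through the bound-site population.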

URL: https://www.sciencedirect.com/science/article/pii/B0122266803001630

Utility practices on fault location

Md Shafiullah , ... A.H. Al-Mohammed , in Power System Fault Diagnosis, 2022

12.11.3.4 Hill-of-potential method

The hill-of-potential method is also known as the AC leakage method, the peak-of-potential method, and the AC earth-gradient method. It differs from the earth-gradient method in that it does not use a separate signal-injection source. The test equipment consists of insulated gloves or similar protective equipment, a high-impedance AC voltmeter or an auto-ranging digital VOM with high input impedance, a test cable reel, and a ground probe. The method is used on secondary distribution circuit cables that are directly buried and is also applicable to circuits installed in moisture-absorbing conduit. Using the existing system voltage produced by the secondary of the transformer, it can detect phase-conductor-to-earth faults, shorts, and some nonlinear faults, and it has proven effective in locating faults on street lighting circuits. Its accuracy is within one foot of the cable fault. However, the hill-of-potential method will not detect open-conductor faults unless the source end of the conductor is also in contact with the ground, and it is not effective on shielded or concentric-neutral secondary cable circuits. The method is sensitive to the general earth resistivity, particularly in the area where the phase conductor is in contact with the surrounding earth. It may also be challenging to use if the faulted cable is buried excessively deep, or if the fault to earth on the phase conductor is located near a bare neutral conductor or metallic pipes that limit the return current flowing in the earth. Moreover, the method cannot be used if the fault is severe enough to blow a fuse on the primary side of the transformer or the secondary cable limiters [47].
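Operationally, the method amounts to walking the cable route with the ground probe and voltmeter and finding where the measured earth-gradient voltage peaks (the "hill"), which lies above the fault. A toy sketch of that search, with entirely made-up readings:

```python
def locate_fault(positions_ft, voltages):
    """Return the probe position (feet along the cable route) with the
    highest voltmeter reading; the voltage 'hill' peaks above the fault.
    positions_ft and voltages are parallel lists of survey readings."""
    if len(positions_ft) != len(voltages) or not positions_ft:
        raise ValueError("need matching, non-empty readings")
    peak = max(range(len(voltages)), key=voltages.__getitem__)
    return positions_ft[peak]

# Hypothetical survey: readings rise toward the fault and fall past it.
pos = [0, 5, 10, 15, 20, 25, 30]
volts = [0.4, 0.9, 2.1, 6.8, 2.4, 1.0, 0.5]
print(locate_fault(pos, volts))  # prints 15
```

A real survey refines the peak with closer-spaced probe readings, which is what gives the method its roughly one-foot accuracy.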

URL: https://www.sciencedirect.com/science/article/pii/B9780323884297000114

Radio propagation

Alan Bensky , in Short-range Wireless Communication (Third Edition), 2019

2.9.3 Polarization diversity

Fading characteristics are dependent on polarization. A signal can be transmitted and received separately on horizontal and vertical antennas to create two diversity channels. Since reflections can change the direction of polarization of a radio wave, this characteristic of a signal can also be exploited to create two separate signal channels using cross-polarized antennas at the receiver only. Polarization diversity is particularly advantageous with a portable handheld transmitter, since the orientation of its antenna is not rigidly defined. Polarization diversity doesn't allow more than two channels, and the degree of independence of the channels is usually lower than in the frequency- and space-diversity cases. However, it may be simpler and less expensive to implement and may give enough improvement to justify its use, even though performance will be less than can be achieved with space or frequency diversity.
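The benefit of a second, partially independent channel can be illustrated with a small simulation: two Rayleigh-faded branches combined by selection (always taking the stronger branch) drop below a deep-fade threshold far less often than a single branch. The fading model, threshold, and assumption of fully independent branches below are illustrative, not from the text:

```python
import math
import random

def rayleigh_power(rng):
    """Instantaneous power of a Rayleigh-faded branch with unit mean
    power: exponentially distributed."""
    return -math.log(1.0 - rng.random())

def outage_rates(n_trials, threshold, seed=1):
    """Fraction of trials in which received power falls below `threshold`
    for a single branch vs. two-branch selection combining, assuming the
    branches fade independently."""
    rng = random.Random(seed)
    single = sel = 0
    for _ in range(n_trials):
        p1, p2 = rayleigh_power(rng), rayleigh_power(rng)
        if p1 < threshold:
            single += 1
        if max(p1, p2) < threshold:
            sel += 1
    return single / n_trials, sel / n_trials

single, selection = outage_rates(100_000, threshold=0.1)
print(f"single-branch outage {single:.3f}, selection outage {selection:.4f}")
```

With independent branches the outage probability roughly squares; real polarization branches are correlated, which is why, as noted above, the gain is smaller than for space or frequency diversity.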

URL: https://www.sciencedirect.com/science/article/pii/B9780128154052000026