How to Determine a Systematic Error

A systematic error, unlike a random one, retains its magnitude (and sign) throughout the experiment. Systematic errors arise from the limited accuracy of instruments, from failure to take external factors into account, and so on.

Usually the main contribution to the systematic error comes from the error set by the accuracy of the instruments used for the measurement. That is, no matter how many times we repeat a measurement, the accuracy of the result will not exceed the accuracy guaranteed by the characteristics of the instrument. For ordinary measuring tools (a ruler, a spring balance, a stopwatch), half of the smallest scale division is taken as the absolute systematic error. Thus, in lab work No. 24 considered here, the quantity h′ can be measured with an accuracy of 0.05 cm if the ruler has millimeter divisions, and 0.5 cm if it has only centimeter divisions.

The systematic errors of commercially produced electrical measuring instruments are determined by their accuracy class, which is usually expressed as a percentage. By degree of accuracy, electrical measuring instruments are divided into 8 basic accuracy classes: 0.05, 0.1, 0.2, 0.5, 1, 1.5, 2.5, 4. The accuracy class is a quantity giving the maximum permissible relative error as a percentage. If, for example, an instrument has accuracy class 2, this means that its maximum relative error when measuring, say, a current is 2 %, i.e.

ΔI_max / I_max · 100 % = 2 %,

where I_max is the upper limit of the ammeter's measurement scale. The quantity ΔI_max (the absolute error in the measured current) is then

ΔI_max = 0.02 · I_max    (6)

for any current measurement on this ammeter. Since ΔI_max computed from formula (6) is the maximum error permitted by this instrument, it is usually held that, to determine the systematic error ΔI_sys, the error given by the instrument's accuracy class should be divided by two, i.e.

ΔI_sys = ΔI_max / 2,

and this value is likewise the same for all measurements on the instrument. However, the relative error (in our case ΔI_sys / I, where I is the instrument reading) is smaller the closer the measured value is to the maximum possible on the instrument. It is therefore better to choose the instrument so that its needle goes past the middle of the scale during measurements.
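As a quick numerical illustration (a minimal sketch: the class-2 ammeter and its 5 A range below are assumed values, not data from work No. 24):

```python
# Accuracy class -> absolute and relative error for an ammeter (assumed values).
accuracy_class = 2.0    # maximum permissible relative error, % of full scale
full_scale = 5.0        # upper limit of the ammeter scale I_max, in A (assumed)

dI_max = accuracy_class / 100 * full_scale   # formula (6): maximum absolute error
dI_sys = dI_max / 2                          # the usual convention from the text

for reading in (1.0, 2.5, 4.5):              # instrument readings I, in A
    rel = dI_sys / reading * 100             # relative error, in %
    print(f"I = {reading:.1f} A: relative error {rel:.1f} %")
```

The relative error falls from 5 % at one-fifth of the scale to about 1 % near full scale, which is why the needle should sit past mid-scale.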

Real experiments contain both systematic and random errors. Let them be characterized by the absolute errors Δ_sys and Δ_rand. The total error of the experiment is then found from the formula

Δ = √(Δ_sys² + Δ_rand²).    (7)

Formula (7) shows that if one of these errors is small, it can be neglected. For example, suppose Δ_sys is 2 times larger than Δ_rand; then

Δ = √(Δ_sys² + (Δ_sys / 2)²) = Δ_sys √1.25 ≈ 1.12 Δ_sys,

i.e., to within 12 %, Δ = Δ_sys. Thus the smaller error adds almost nothing to the larger one, even when it is half as large. If the random error of the experiment is at least a factor of two smaller than the systematic one, there is no point in making repeated measurements, since the total error of the experiment hardly decreases. It is enough to make 2 or 3 measurements to confirm that the random error is indeed small.

In the case of lab work No. 24 considered here, Δ_rand = 0.26 cm, while Δ_sys is either 0.05 cm or 0.5 cm. In these two cases

Δ = √(0.26² + 0.05²) ≈ 0.26 cm   and   Δ = √(0.26² + 0.5²) ≈ 0.56 cm.

As can be seen, in the first case Δ_sys can be neglected, and in the second, Δ_rand.
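A one-line check of formula (7) with these numbers (a sketch; the values are the ones quoted above):

```python
import math

def total_error(sys_err, rand_err):
    # formula (7): absolute errors combine in quadrature
    return math.hypot(sys_err, rand_err)

print(total_error(0.05, 0.26))      # ~0.26 cm: the systematic part is negligible
print(total_error(0.50, 0.26))      # ~0.56 cm: the random part is nearly negligible
print(total_error(2.0, 1.0) / 2.0)  # ~1.12: the "within 12 %" example above
```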

Statistical Methods for Physical Science

William R. Leo, in Methods in Experimental Physics, 1994

1.4.1 Systematic Errors

Systematic errors concern the possible biases that may be present in an observation. A common example is the zeroing of a measuring instrument such as a balance or a voltmeter. Clearly, if this is not done properly, all measurements made with the instrument will be offset or biased by some constant amount. However, even if the greatest of care is taken, one can never be certain that the instrument is exactly at the zero point. Indeed, various physical factors such as the thickness of the scale lines, the lighting conditions under which the calibration is performed, and the sharpness of the calibrator's eyesight will ultimately limit the process, so that one can say only that the instrument has been "zeroed" to within some range of values, say 0 ± δ. This uncertainty in the "zero value" then introduces the possibility of a bias in all subsequent measurements made with this instrument; i.e., there will be a certain nonzero probability that the measurements are biased by a value as large as ±δ.

More generally, systematic errors arise whenever there is a comparison between two or more measurements. And indeed, some reflection will show that all measurements and observations involve comparisons of some sort. In the preceding case, for example, a measurement is referenced to the zero point (or some other calibration point) of the instrument. Similarly, in detecting the presence of a new particle, the signal must be compared to the background events that could simulate such a particle, etc. Part of the art of experimentation, in fact, is to ensure that systematic errors are sufficiently small for the measurement at hand, and indeed, in some experiments how well this uncertainty is controlled can be the key success factor.

One example of this is the measurement of parity violation in high-energy electron-nucleus scattering. This effect is due to the exchange of a Z0 boson between electron and nucleus and manifests itself as a tiny difference between the scattering cross sections for electrons that are longitudinally polarized parallel (dσR) and antiparallel (dσL) to their line of movement. This difference is expressed as the asymmetry parameter, A = (dσR − dσL)/(dσR + dσL), which has an expected value of A ≈ 9 × 10⁻⁵ [9].

To perform the experiment, a longitudinally polarized electron beam is scattered off a suitable target, and the scattering rates are measured for beam polarization parallel and antiparallel. To be able to make a valid comparison of these two rates at the desired level, however, it is essential to maintain identical conditions for the two measurements. Indeed, a tiny change in any number of parameters, for example, the energy of the beam, could easily create an artificial difference between the two scattering rates, thereby masking any real effect. The major part of the effort in this experiment, therefore, is to identify the possible sources of systematic error, design the experiment so as to minimize or eliminate as many of these as possible and monitor those that remain!

Systematic errors are distinguished from random errors by two characteristics. First, in a series of measurements taken with the same instrument and calibration, all measurements will have the same systematic error. In contrast, the random errors in these same data will fluctuate from measurement to measurement in a completely independent fashion. Moreover, the random errors may be decreased by making repeated measurements as shown by Eq. (1.32). The systematic errors, on the other hand, will remain constant no matter how many measurements are made and can be decreased only by changing the method of measurement. Systematic errors, therefore, cannot be treated using probability theory, and indeed there is no general procedure for this. One must usually resort to a case by case analysis, and as a general rule, systematic errors should be kept separate from the random errors.
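This behavior is easy to see numerically. In the sketch below (all values assumed), simulated readings share a fixed zero-point offset plus independent noise; the scatter of the mean shrinks roughly as 1/√n, but the mean converges to the biased value, not the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 5.00     # quantity being measured (assumed)
zero_offset = 0.12    # systematic bias from imperfect zeroing (assumed)
noise_sd = 0.50       # standard deviation of the random error (assumed)

for n in (10, 100, 10_000):
    readings = true_value + zero_offset + rng.normal(0.0, noise_sd, size=n)
    # averaging beats the random error but leaves the offset untouched
    print(f"n = {n:6d}: mean = {readings.mean():.3f}  (true value {true_value})")
```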

A point of confusion, which sometimes occurs, especially when data are analyzed and treated in several different stages, is that a random error at one stage can become a systematic error at a later stage. In the first example, for instance, the uncertainty incurred when zeroing the voltmeter is a random error with respect to the zeroing process. The experiment here is the positioning of the pointer exactly on the zero marking, and one can easily imagine doing this process many times to obtain a distribution of "zero points" with a certain standard deviation. Once a zero calibration is made, however, subsequent measurements made with the instrument will all be referred to that particular zero point and its error. For these measurements, the zero-point error is a systematic error. Another similar example is the least-squares (see Chapter 9) fitted calibration curve. Assuming that the calibration is a straight line, the resulting slope and intercept values for this fit will contain random errors due to the calibration measurements. For all subsequent measurements referred to this calibration curve, however, these errors are not random but systematic.

URL: https://www.sciencedirect.com/science/article/pii/S0076695X08602513

Data Reduction and the Propagation of Errors

Robert G. Mortimer, in Mathematics for Physical Chemistry (Fourth Edition), 2013

16.1.1 The Combination of Random and Systematic Errors

Random and systematic errors combine in the same way as the errors in Eq. (16.4). If εr is the probable error due to random errors and εs is the probable error due to systematic errors, the total probable error is given by

εt = (εr² + εs²)^(1/2).    (16.5)

If you use the 95% confidence level for the random errors, you must use the same confidence level for systematic errors if you make an educated guess at the systematic error. Most people instinctively tend to estimate errors at about the 50% confidence level. To avoid this tendency, you might make a first guess at your systematic error and then double it.

Example 16.2

Assume that a length has been measured as 37.8 cm with an expected random error of 0.35 cm and a systematic error of 0.06 cm. Find the total expected error.

εt = [(0.35 cm)² + (0.06 cm)²]^(1/2) = 0.36 cm ≈ 0.4 cm,   l = 37.8 cm ± 0.4 cm.

If one source of error is much larger than the other, the smaller error makes a much smaller contribution after the errors are squared. In the previous example, the systematic error is nearly negligible, especially since one significant digit is usually sufficient in an expected error.

Exercise 16.2

Assume that you estimate the total systematic error in a melting temperature measurement as 0.20 °C at the 95% confidence level and that the random error has been determined to be 0.06 °C at the same confidence level. Find the total expected error.

URL: https://www.sciencedirect.com/science/article/pii/B9780124158092000161

Experimental Design and Sample Size Calculations

Andrew P. King, Robert J. Eckersley, in Statistics for Biomedical Engineers and Scientists, 2019

9.4.2 Blinding

Systematic errors can arise because either the participants or the researchers have particular knowledge about the experiment. Probably the best known example is the placebo effect, in which patients’ symptoms can improve simply because they believe that they have received some treatment even though, in reality, they have been given a treatment of no therapeutic value (e.g. a sugar pill). What is less well known, but nevertheless well established, is that the behavior of researchers can alter in a similar way. For example, a researcher who knows that a participant has received a specific treatment may monitor the participant much more carefully than a participant who he/she knows has received no treatment. Blinding is a method to reduce the chance of these effects causing a bias. There are three levels of blinding:

1. Single-blind. The participant does not know if he/she is a member of the treatment or control group. This normally requires the control group to receive a placebo. Single-blinding can be easy to achieve in some types of experiments, for example, in drug trials the control group could receive sugar pills. However, it can be more difficult for other types of treatment. For example, in surgery there are ethical issues involved in patients having a placebo (or sham) operation.

2. Double-blind. Neither the participant nor the researcher who delivers the treatment knows whether the participant is in the treatment or control group.

3. Triple-blind. Neither the participant, the researcher who delivers the treatment, nor the researcher who measures the response knows whether the participant is in the treatment or control group.

URL: https://www.sciencedirect.com/science/article/pii/B9780081029398000189

Thermoluminescence Dating

L. Musílek, M. Kubelík, in Radiation in Art and Archeometry, 2000

8.2 Systematic errors

The uncertainties contributing to the systematic error originate from various sources. The first source of the systematic error is the calibration of the α source, the β source, the α counter, the potassium content measurement, the β measurement and the γ measurement. Assuming that each of these uncertainties is ±5 %, then, for the various versions of dosimetry, the error terms are:

(16a)  (σ4)a² = 25 {fα² + (1 − fα)² + (fα + fβ,Th,U + fγ,Th,U)² + (fβ,K + fγ,K)²}

(16b)  (σ4)b² = 25 {fα² + (1 − fα − fβ)² + (fα + fγ,Th,U)² + fγ,K² + fβ²}

(16c)  (σ4)c² = 25 {fα² + (1 − fα − fβ)² + (fα + fβ,Th,U)² + fβγ,K² + fγ²}

(16d)  (σ4)d² = 25 {2fα² + fβ² + fγ²}

Due to the observed discrepancy between the calculated (from radioactive analysis) and measured (by TLD) γ dose rates, which is estimated at ±10 %, an additional error term, σ5, needs to be added.

The second source of the systematic error arises from the uncertainty of the ratio between the uranium and thorium series. The measurement by α counting gives no information about this ratio, and converting the α count-rates to dose rates depends on it, as the energy of β and γ radiation emitted per α particle differs between both series. For the uncertainty in this ratio ±50 % is assumed and it is used for various options of dosimetry:

(18a)  (σ6)a² = 15 fβ,Th,U² + 10 fγ,Th,U²

Another problem is given by the fact that both the uranium and thorium series contain one of the isotopes of radon as a member. Possible escape of this gas can influence the dose rate and can be evaluated by measurement in a gas cell, where only particles from escaped radon are detected by a scintillator. This technique is described in [37]. However, the estimate of the escape measured in the laboratory does not necessarily correspond to the real escape rate at the sampling location. Assuming that the uncertainty of the value gs, which expresses the lost α counts for the conditions of the sample, is ±25 %, we obtain the error term:

(19)  (σ7)² = (gs / 4αB)² (fα + fβ,Th,U)² + (gw / 2α′)² fγ,Th,U²

where αB is the α count rate corrected for radon escape and the second term refers to radon escape in the soil, α’ being the corrected α count rate from the soil and gw the lost counts for the soil sample (having the same wetness as in the ground).

The last important source of the systematic error is given by the uncertainty δF of the fractional water uptake F. The value of δF must be estimated from the knowledge about the conditions (rainfall, drainage, etc.) on site. This error can be approximated by:

(20)  σ8 = (δF / F) {W (1.5 fα + 1.25 fβ) + W′ (1.15 fγ)}

W and W′ are the saturation wetness of the sample and the soil, respectively, expressed as the ratio of the saturation weight minus the dry weight to the dry weight, in percent.

The overall systematic error is a combination of the contributions discussed above, i.e.:

(21)  σs² = σ4² + σ5² + σ6² + σ7² + σ8²

and the overall error for the sample is given by the combination of the random and systematic errors.
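Purely to show how such a budget is assembled, here is a minimal sketch; every dose-rate fraction and component term below is an assumed placeholder, not data from this chapter:

```python
import math

# Assumed dose-rate fractions of a sample (placeholders for illustration)
f_a, f_b, f_g = 0.45, 0.30, 0.25            # alpha, beta, gamma fractions

# Calibration term, Eq. (16d): (sigma_4)_d^2 = 25 (2 f_a^2 + f_b^2 + f_g^2)
s4_sq = 25 * (2 * f_a**2 + f_b**2 + f_g**2)

# The remaining terms (Eqs. 17-20) require sample-specific inputs;
# placeholder values are used here only so that Eq. (21) can be evaluated.
s5_sq, s6_sq, s7_sq, s8_sq = 6.3, 2.0, 1.1, 4.0

sigma_s = math.sqrt(s4_sq + s5_sq + s6_sq + s7_sq + s8_sq)  # Eq. (21), in %
sigma_r = 5.0                                # random error, in % (assumed)
overall = math.hypot(sigma_r, sigma_s)       # random and systematic combined
print(f"systematic: {sigma_s:.1f} %, overall: {overall:.1f} %")
```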

URL: https://www.sciencedirect.com/science/article/pii/B9780444504876500523

Total Survey Error

Tom W. Smith, in Encyclopedia of Social Measurement, 2005

Bias, or Systematic Error

Turning to bias, or systematic error, there is also a sampling component. First, the sample frame (i.e., the list or enumeration of elements in the population) may either omit or double count units. For example, the U.S. Census both misses people (especially African-Americans and immigrants) and counts others twice (especially people with more than one residence), and samples based on the census reflect these limitations. Second, certain housing units, such as new dwellings, secondary units (e.g., basement apartments in what appears to be a single-family dwelling), and remote dwellings, tend to be missed in the field. Likewise, within housing units, certain individuals, such as boarders, tend to be underrepresented and some respondent selection methods fail to work in an unbiased manner (e.g., the last/next birthday method overrepresents those who answer the sample-screening questions). Third, various statistical sampling errors occur. Routinely, the power of samples is overestimated because design effects are not taken into consideration. Also, systematic sampling can turn out to be correlated with various attributes of the target population. For example, in one study, both the experimental form and respondent selection were linked by systematic sampling in such a way that older household members were disproportionately assigned to one experimental version of the questionnaire, thus failing to randomize respondents to both experimental forms.

Nonsampling error comes from both nonobservational and observational errors. The first type of nonobservational error is coverage error, in which a distinct segment of the target population is not included in the sample. For example, in the United States, preelection random-digit-dialing (RDD) polls want to generalize to the voting population, but systematically exclude all voters not living in households with telephones. Likewise, samples of businesses often underrepresent smaller firms. The second type of nonobservational error consists of nonresponse (units are included in the sample, but are not successfully interviewed). Nonresponse has three main causes: refusal to participate, failure to contact because people are away from home (e.g., working or on vacation), and all other reasons (such as illness and mental and/or physical handicaps).

Observational error includes collection, processing, and analysis errors. As with variable error, collection error is related to mode, instrument, interviewer, and respondent. Mode affects population coverage. Underrepresentation of the deaf and poor occurs in telephone surveys, and of the blind and illiterate, in mail surveys. Mode also affects the volume and quality of information gathered. Open-ended questions get shorter, less complete answers on telephone surveys, compared to in-person interviews. Bias also is associated with the instrument. Content, or the range of information covered, obviously determines what is collected. One example of content error is when questions presenting only one side of an issue are included, such as is commonly done in what is known as advocacy polling. A second example is specification error, in which one or more essential variables are omitted so that models cannot be adequately constructed and are therefore misspecified.

Various problematic aspects of question wordings can distort questions. These include questions that are too long and complex, are double-barreled, include double negatives, use loaded terms, and contain words that are not widely understood. For example, the following item on the Holocaust is both complex and uses a double negative: “As you know, the term ‘holocaust’ usually refers to the killing of millions of Jews in Nazi death camps during World War II. Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?” After being presented with this statement in a national U.S. RDD poll in 1992, 22% of respondents said it was possible that the Holocaust never happened, 65% said that it was impossible that it never happened, and 12% were unsure. Subsequent research, however, demonstrated that many people had been confused by the wording and that Holocaust doubters were actually about 2% of the population, not 22%. Error from question wording also occurs when terms are not understood in a consistent manner.

The response scales offered also create problems. Some formats, such as magnitude measurement scaling, are difficult to follow, leaving many, especially the least educated, unable to express an opinion. Even widely used and simple scales can cause error. The 10-point scalometer has no clear midpoint and many people wrongly select point 5 on the 1–10 scale in a failed attempt to place themselves in the middle. Context, or the order of items in a survey, also influences responses in a number of quite different ways. Prior questions may activate certain topics and make them more accessible (and thus more influential) when later questions are asked. Or they may create a contrast effect under which the prior content is excluded from later consideration under a nonrepetition rule. A norm of evenhandedness may be created that makes people answer later questions in a manner consistent with earlier questions. For example, during the Cold War, Americans, after being asked if American reporters should be allowed to report the news in Russia, were much more likely to say that Russian reporters should be allowed to cover stories in the United States, compared to when the questions about Russian reporters were asked first. Even survey introductions can influence the data quality of the subsequent questions.

Although social science scholars hope that interviewers merely collect information, in actuality, interviewers also affect what information is reported. First, the mere presence of an interviewer usually magnifies social desirability effects, so that there is more underreporting of sensitive behaviors to interviewers than when self-completion is used. Second, basic characteristics of interviewers influence responses. For example, Whites express more support for racial equality and integration when interviewed by Blacks than when interviewed by Whites. Third, interviewers may have points of view that they convey to respondents, leading interviewers to interpret responses, especially to open-ended questions, in light of their beliefs.

Much collection error originates from respondents. Some problems are cognitive. Even given the best of intentions, people are fallible sources. Reports of past behaviors may be distorted due to forgetting the incidents or misdating them. Minor events will often be forgotten, and major events will frequently be recalled as occurring more recently than was actually the case. Of course, respondents do not always have the best of intentions. People tend to underreport behaviors that reflect badly on themselves (e.g., drug use and criminal records) and to overreport positive behaviors (e.g., voting and giving to charities).

Systematic error occurs during the processing of data. One source of error relates to the different ways in which data may be coded. A study of social change in Detroit initially found large changes in respondents’ answers to the same open-ended question asked and coded several decades apart. However, when the original open-ended responses from the earlier survey were recoded by the same coders who coded the latter survey, the differences virtually disappeared, indicating that the change had been in coding protocols and execution, not in the attitudes of Detroiters. Although data-entry errors are more often random, they can seriously bias results. For example, at one point in time, no residents of Hartford, Connecticut were being called for jury duty; it was discovered that the new database of residents had been formatted such that the “d” in “Hartford” fell in a field indicating that the listee was dead. Errors can also occur when data are transferred. Examples include incorrect recoding, misnamed variables, and misspecified data field locations. Sometimes loss can occur without any error being introduced. For example, 20 vocabulary items were asked on a Gallup survey in the 1950s and a summary scale was created. The summary scale data still survive, but the 20 individual variables have been lost. Later surveys included 10 of the vocabulary items, but they cannot be compared to the 20-item summary scale.

Wrong or incomplete documentation can lead to error. For example, documentation on the 1967 Political Participation Study (PPS) indicated that one of the group memberships asked about was “church-affiliated groups.” Therefore, when the group membership battery was later used in the General Social Surveys (GSSs), religious groups were one of the 16 groups presented to respondents. However, it was later discovered that church-affiliated groups had not been explicitly asked about on the earlier survey, but that the designation had been pulled out of an “other-specify” item. Because the GSS explicitly asked about religious groups, it got many more mentions than had appeared in the PPS; this was merely an artifact of different data collection procedures that resulted from unclear documentation.

Most discussions of total survey error stop at the data-processing stage. But data do not speak for themselves. Data “speak” when they are analyzed, and the analysis is reported by researchers. Considerable error is often introduced at this final stage. Models may be misspecified, not only by leaving crucial variables out of the survey, but also by omitting such variables from the analysis, even when they are collected. All sorts of statistical and computational errors occur during analysis. For example, in one analysis of a model explaining levels of gun violence, a 1 percentage point increase from a base incidence level of about 1% was misdescribed as a 1% increase, rather than as a 100% increase. Even when a quantitative analysis is done impeccably, distortion can occur in the write-up. Common problems include the use of jargon, unclear writing, the overemphasis and exaggeration of results, inaccurate descriptions, and incomplete documentation. Although each of the many sources of total survey error can be discussed individually, they constantly interact with one another in complex ways. For example, poorly trained interviewers are more likely to make mistakes with complex questionnaires, the race of the interviewer can interact with the race of respondents to create response effects, long, burdensome questionnaires are more likely to create fatigue among elderly respondents, and response scales using full rankings are harder to do over the phone than in person. In fact, no stage of a survey is really separate from the other stages, and most survey error results from, or is shaped by, interactions between the various components of a survey.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985001262

Part 1

D. DELAUNAY, in Advances in Wind Engineering, 1988

Observation errors

To test the effects of possible systematic errors of observation on ΔP, R, and T, the values of the parameters of observed cyclones have been increased, in succession, by 10% for ΔP and T and 20% for R. Similarly, it may be feared that all the cyclones which have crossed the area in question were not listed. Simulation was therefore carried out with an average value of NC increased by 10%. It appears that these modifications result in an increase of the values of V50 and V1000 not exceeding 1.5 m/s, except for ΔP, for which a systematic over-evaluation of 10% leads to an increase of V50 and V1000 between 2 and 2.5 m/s.

URL: https://www.sciencedirect.com/science/article/pii/B978044487156550014X

Model Evaluation and Enhancement

Robert Nisbet Ph.D., … Ken Yale D.D.S., J.D., in Handbook of Statistical Analysis and Data Mining Applications (Second Edition), 2018

Evaluation of Models According to Random Error

We can express the total of the random error and systematic error mathematically, but it is very difficult to distinguish between them in practice. For example, the general form of a regression model is

(11.2)  Y = a + b1X1 + b2X2 + b3X3 + ⋯ + bnXn + Error

where a is the intercept, the X-values are the predictor variables, and the b-values are the coefficients associated with each X-value.

If the signal in the data set is faint, the error term will be relatively large. If the signal in the data is strong, the error will be relatively small. Unfortunately, the error term in Eq. (11.1) is a combination of random error and model error. Most model performance metrics do not distinguish between random error and model error. But there are some techniques that can be used to measure model error to some extent and correct for it. We will begin by discussing model performance metrics, which express the total combined error. Later in the chapter, we will present some common techniques for assessing model error and show some ways to correct for it (partially).
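A minimal sketch of Eq. (11.2) on synthetic data (all numbers assumed): an ordinary least-squares fit recovers the coefficients, and the residuals carry the combined error term.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                    # predictors X1, X2
Y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] \
    + rng.normal(scale=0.3, size=200)            # the Error term (random part)

A = np.column_stack([np.ones(len(X)), X])        # design matrix [1, X1, X2]
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # estimates of a, b1, b2
residuals = Y - A @ coef

print("a, b1, b2 ~", np.round(coef, 2))           # close to [1.0, 2.0, -0.5]
print("residual sd ~", round(residuals.std(), 2)) # close to 0.3, the error term
```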

URL: https://www.sciencedirect.com/science/article/pii/B9780124166325000116

Quantum Entanglement and Information Processing

J.A. Jones, in Les Houches, 2004

4.2 Composite rotations

The use of composite rotations to reduce the effects of systematic errors in conventional NMR experiments relies on the fact that any state of a single isolated qubit can be mapped to a point on the Bloch sphere, and any unitary operation on a single isolated qubit corresponds to a rotation on the Bloch sphere. The result of applying any series of rotations (a composite rotation) is itself a rotation, and so there are many apparently equivalent ways of performing a desired rotation. These different methods may, however, show different sensitivity to errors: composite rotations can be designed to be much less error prone than simple rotations!

A rotation can go wrong in two basic ways: the rotation angle can be wrong or the rotation axis can be wrong. In an NMR experiment (viewed in the rotating frame) ideal RF pulses cause rotation of a spin through an angle θ = ω1 t around an axis in the xy-plane. So-called pulse length errors occur when the pulse power ω1 is incorrect, so that the flip angle θ is systematically wrong by some fraction. This can be due to experimenter carelessness, but more usually arises from the inhomogeneity in the RF field over a macroscopic sample. The second type of error, off-resonance effects (Fig. 6), occurs when the excitation frequency doesn't match the transition frequency, so that the Hamiltonian is the sum of RF and off-resonance terms. This results in rotations around a tilted axis, and the rotation angle is also increased.

Fig. 6. Effect of applying an off-resonance 180° pulse to a spin with initial state Iz; the spin rotates around a tilted axis. Trajectories are shown for small, medium and large off-resonance effects.

The first composite rotation [47] was designed to compensate for pulse length errors in an inversion pulse, that is a pulse which takes the state Iz to − Iz. This can be achieved by, for example, a simple 180° pulse, but this is quite sensitive to pulse length errors. The composite rotation 90°x180°y90°x has the same effect in the absence of errors, but will also partly compensate for pulse length errors. This is shown in Fig. 7 which plots the inversion efficiency of the simple and composite 180° pulses as a function of the fractional pulse length error g. (The inversion efficiency of an inversion pulse measures the component of the final spin state along −Iz after the pulse is applied to an initial state of Iz.)

Fig. 7. The inversion efficiency of a simple 180° pulse (dashed line) and of the composite pulse 90°x 180°y 90°x (solid line) as a function of the fractional pulse length error g. The way in which the composite pulse works can be understood by examining trajectories on the Bloch sphere, which are shown on the right for three values of g.
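The comparison in Fig. 7 can be reproduced with elementary rotations of a Bloch vector (a sketch: the rotation-matrix treatment is standard; the error values chosen are arbitrary):

```python
import numpy as np

def rot(axis, theta):
    """Rotation of a Bloch vector through angle theta about the x or y axis."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def inversion_efficiency(g, composite):
    """Component along -Iz after the pulse, for initial state Iz.
    g is the fractional pulse length error: every flip angle scales by 1+g."""
    f = 1.0 + g
    if composite:   # 90x 180y 90x; the rightmost factor acts first in time
        U = rot("x", f*np.pi/2) @ rot("y", f*np.pi) @ rot("x", f*np.pi/2)
    else:           # simple 180x pulse
        U = rot("x", f*np.pi)
    return -(U @ np.array([0.0, 0.0, 1.0]))[2]

for g in (0.0, 0.1, 0.2):
    print(f"g = {g:.1f}: simple {inversion_efficiency(g, False):.4f}, "
          f"composite {inversion_efficiency(g, True):.4f}")
```

At g = 0.2 the simple pulse has already dropped to about 0.81, while the composite sequence stays near 0.98, which is exactly the flattened curve of Fig. 7.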

Composite pulses of this kind are very widely used within conventional NMR, and many different pulses have been developed [48], but most of them are not directly applicable to quantum computing [50]. This is because conventional NMR pulse sequences are designed to perform specific motions on the Bloch sphere (such as inversion), in which case the initial and final spin states are known, while for quantum computing it is necessary to use general rotations, which are accurate whatever the initial state of the system. Perhaps surprisingly, composite pulses are known which have the desired property of performing accurate rotations whatever the initial spin state.

URL: https://www.sciencedirect.com/science/article/pii/S0924809903800343

Probability, Statistics, and Experimental Errors

Robert G. Mortimer, in Mathematics for Physical Chemistry (Fourth Edition), 2013

Abstract

Every measured quantity is subject to experimental error. The two types of experimental error are systematic errors and random errors. Systematic errors must usually be estimated by educated guesswork. Random errors are assumed to be a sample from a population of many imaginary replicas of the experiment. Such a population is assumed to be governed by probability theory. Mathematical statistics is used to infer the properties of a population from a sample. Random errors can be treated statistically if the measurement can be repeated a number of times. The mean of a set of repeated measurements is a better estimate of the correct value of a variable than is a single measurement.

URL: https://www.sciencedirect.com/science/article/pii/B978012415809200015X

The B → D*ℓν Form Factor at Zero Recoil and the Determination of |Vcb|

J.N. Simone, … S.M. Ryan, in Proceedings of the 31st International Conference on High Energy Physics Ichep 2002, 2003

2 DOUBLE RATIO METHOD

We consider three double ratios of hadronic matrix elements for which the bulk of statistical and systematic errors are expected to cancel. From these ratios we extract three zero recoil form factors:

(3)  [⟨D|v0|B⟩ ⟨B|v0|D⟩] / [⟨D|v0|D⟩ ⟨B|v0|B⟩] ⇒ |h+(1)|²

(4)  [⟨D∗|v0|B∗⟩ ⟨B∗|v0|D∗⟩] / [⟨D∗|v0|D∗⟩ ⟨B∗|v0|B∗⟩] ⇒ |h1(1)|²

(5)  [⟨D∗|A1|B⟩ ⟨B∗|A1|D⟩] / [⟨D∗|A1|D⟩ ⟨B∗|A1|B⟩] ⇒ |HA1(1)|²

Form factor h+ is one of two form factors contributing to B → Dℓν decays [8]. Note that the third ratio yields HA1 and not hA1.

Form factors h+, h1, and HA1 depend, respectively, upon the parameters ℓP, ℓV, and ℓA, and have the quark mass dependence

(6)  [1 − |hX(1)|] / Δ² = ℓX − ℓX[3] (1/(2mc) + 1/(2mb)) + …

where Δ = 1/(2mc) − 1/(2mb). The bare charm and bottom quark masses are inputs in lattice QCD. We compute double ratios for a range of "charm" and "bottom" quark masses. After matching the lattice theory to HQET, we determine all the ℓX as well as the order 1/mQ³ coefficients ℓX[3] by studying the mass dependence of the form factors. These long-distance coefficients are combined as in Equation 2 to give our determination of hA1(1).

URL: https://www.sciencedirect.com/science/article/pii/B9780444513434501484

Under repeated measurements, systematic errors remain constant or vary according to a definite law.

When one speaks of an error, what is meant is not a single value but an interval of values within which the true value lies with a given probability; hence one speaks of estimating the error. If the error could actually be measured, i.e., if its sign and value became known, it could be removed from the measured value of the physical quantity to yield the true value.

To obtain results that differ as little as possible from the true value of the measured physical quantity, multiple observations are made and the resulting data set is processed mathematically in order to determine and minimize the random component of the error.

Minimization of the systematic error during the observations is performed by the following methods: the substitution method (the measured quantity is replaced by a standard measure), the opposition method (two successive measurements are made with the measure and the measured object interchanged), and the sign-compensation method (two successive measurements are made in which the influencing quantity enters with opposite sign).
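As a toy illustration of the sign-compensation method (all numbers assumed), the influencing quantity enters the two successive measurements with opposite signs and cancels in the mean:

```python
true_value = 10.00   # quantity being measured (assumed)
bias = 0.30          # contribution of the influencing quantity (assumed)

m1 = true_value + bias     # first measurement: the influence enters with "+"
m2 = true_value - bias     # second measurement: the influence is reversed
estimate = (m1 + m2) / 2   # the systematic effect cancels by sign
print(estimate)            # 10.0
```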

With multiple observations, a posteriori (after the observations are completed) elimination of the systematic error is possible through analysis of the series of observations. Consider graphical analysis: the results of successive observations are represented as a function of time, or are ranked in order of increasing error.

Consider the time dependence, with observations made at equal time intervals. The results of successive observations then form a random function of time. A series of experiments consisting of a sequence of observations yields one realization of this function; repeating the series yields a new realization that differs from the first.

The realizations differ mainly because of the factors responsible for the random error, whereas the factors responsible for the systematic error manifest themselves identically at corresponding moments of time in every realization. The value corresponding to a given moment of time is called a section of the random function of time. For each section one can find the mean value over all realizations; it is this component that determines the systematic error. If a smooth curve is drawn through the systematic-error values for all moments of time, it characterizes the law by which the error varies with time. Knowing this law, we can determine a correction that eliminates the systematic error, after which we obtain a "corrected series of observation results".
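A compact sketch of this section-averaging procedure on synthetic data (the linear drift law and all numbers are assumed): rows are repeated realizations, columns are time sections; the per-section mean over realizations estimates the systematic component, and subtracting it gives the corrected series.

```python
import numpy as np

rng = np.random.default_rng(2)
n_series, n_times = 5, 50
t = np.arange(n_times)
drift = 0.02 * t                     # systematic component (assumed linear law)
data = drift + rng.normal(scale=0.1, size=(n_series, n_times))

systematic = data.mean(axis=0)       # mean over realizations in each section
corrected = data - systematic        # "corrected series of observation results"
print(np.round(systematic[:5], 3))   # tracks the drift 0.00, 0.02, 0.04, ...
```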

A number of ways of eliminating systematic errors are known; they can be loosely divided into 4 basic groups:

  •  eliminating the sources of error before the measurements begin;
  •  eliminating errors in the course of the measurement by the methods of substitution, sign compensation, opposition, and symmetric observations;
  •  applying known corrections to the measurement result (elimination of errors by calculation);
  •  estimating the bounds of the systematic errors if they cannot be eliminated.

By the character of their manifestation, systematic errors are divided into constant, progressive, and periodic ones.

Constant systematic errors keep their value during the entire time of the measurements (for example, an error in the graduation of an instrument's scale carries over to all measurement results).

Progressive errors are errors that increase or decrease during the measurement process (for example, errors arising from wear of the contacting parts of the measuring instrument). Periodic errors are errors whose value varies according to a periodic law.

The group of systematic errors also includes: instrumental errors; errors due to incorrect installation of the measuring device; errors arising from external influences; errors of the measurement method (theoretical errors); and subjective errors.
