|Vub| with Shape Functions
==========================


In order to see whether, and how much, consensus there is in the
"community", I sent out the following mail:

> we write this  mail for HFAG (http://www.slac.stanford.edu/xorg/hfag/)
> to  ask for  your advice  on the  extraction of  |Vub|  from charmless
> semileptonic B decays.   Any comment you'd like to  share is very much
> welcome (and even a very short reply of e.g. "yes!, no!, no!, ???"  to
> the four questions below will help us).
>
> All experiments  rely on deFazio/Neubert as MC  generator covering the
> full   phase  space   to  determine   experimental   efficiencies  and
> corrections and sometimes  also to extrapolate to the  full rate. Here
> the shape  function is parametrized  with the "exponential"  form with
> two parameters  "mb" and "a".  So  far, the central  values and errors
> for  these   parameters  have  been  determined   with  two  different
> approaches (illustrated in figure 4 of hep-ex/0402009)
>
>  o Use the B -> s gamma  contour (from CLEO). The contour is large and
>    dominated by  statistical errors.  BELLE now has  a new measurement
>    of the b-> s gamma photon energy spectrum.
>
>  o  Use  the moments  of  b  -> c  l  nu  decays  (and removing  terms
>    proportional to alpha_s^2 and  1/m_b^3). This assumes that the HQET
>    parameters are directly related to the shape function parameters.
>
> It would  be highly  desirable if all  experiments (BABAR,  BELLE, and
> CLEO) used  a common set  of "shape function"  parametrizations, their
> parameters and  errors on  these parameters. For  the HFAG  average of
> |Vub|, the first approach was taken (after much discussion)1.
>
> What is  the best way to  extract |Vub| from  charmless semileptonic B
> decays?
>
> 1.  Combine   all  information  possible  from   B->s  gamma.  Include
>    semi-exclusive measurements as well (e.g. hep-ex/0207074)?
>
> 2.  Use  b->s  gamma  and  additionally  the  constraint  on  mb  from
>    semileptonic  B  decays, i.e.   mb  =  m_B  - \bar{\Lambda},  where
>    \bar{\Lambda} is  the HQET parameter  and mb is the  shape function
>    parameter.
>
> 3. Use semileptonic moments. If the above procedure is not correct, do
>    you have a proposal for an alternative?
>
> 4. Do  something else!  what?   (apart from using  approaches "without
>    recourse to  structure functions";  those methods will  be pursued,
>    but  here  we'd like  to  get  feedback on  how  to  use the  shape
>    function).
>

Below, in random order, are the answers I have received so far. This
list will be updated.
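
For orientation, here is a minimal numerical sketch (Python) of the
"exponential" shape-function form referred to in the mail above, in the
Kagan-Neubert convention F(k+) ~ (1-x)^a exp[(1+a)x] with x = k+/lambda_bar
and lambda_bar = m_B - mb, as I understand it; the parameter values are
purely illustrative, not the HFAG ones.

    import numpy as np
    from scipy.integrate import quad

    M_B = 5.279        # B-meson mass [GeV]
    m_b = 4.65         # shape-function parameter "mb" [GeV] (illustrative)
    a   = 1.5          # shape parameter "a" (illustrative)
    lam = M_B - m_b    # lambda_bar

    def F_unnorm(kplus):
        x = kplus / lam
        return (1.0 - x) ** a * np.exp((1.0 + a) * x)

    norm = quad(F_unnorm, -np.inf, lam)[0]
    F = lambda kplus: F_unnorm(kplus) / norm

    # check the conventional moment relations numerically
    m1 = quad(lambda k: k * F(k), -np.inf, lam)[0]       # expect ~0
    m2 = quad(lambda k: k ** 2 * F(k), -np.inf, lam)[0]  # ~ -lambda_1/3
    print(f"<k+>   = {m1:+.4f} GeV")
    print(f"<k+^2> = {m2:.4f} GeV^2  ->  lambda_1 ~ {-3 * m2:.3f} GeV^2")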


----------------------------------------------------------------------
From: Michael Luke 
To: ursl@physi.uni-heidelberg.de
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Tue, 8 Jun 2004 15:30:10 +0200

Hi Urs,

I dislike using moments to constrain the shape function because the 
parameterization of the shape function in terms of lambda-bar and 
lambda-1 is just a model, as you know, and so there is no real 
correlation between the uncertainties in the parameters lambda-bar and
lambda-1 and the actual uncertainty in the shape function.  Similarly,
anything that gives you mb has the same problem - i.e. you could know 
mb infinitely well, but still not know the shape function.  So I think 
your options 1-3 all have the same problem.
I'm not sure if this is one of the options, but I would take a 
parameterization of the shape function, and then look at the 
corresponding prediction for the B->s gamma photon spectrum ... I would 
then look at the spread in those parameters which still provides an 
acceptable fit to the B->s gamma spectrum, and use that to determine 
the uncertainty in the shape function, from which I would get the 
uncertainty in Vub ...
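
A minimal sketch of what such a parameter scan could look like in practice;
the "data", the errors, and the toy spectrum below are placeholders
standing in for the real measurement and the real NLO prediction:

    import numpy as np

    M_B = 5.279
    rng = np.random.default_rng(0)

    def predict_spectrum(e_gamma, mb, a):
        # toy stand-in for the predicted photon spectrum at given
        # shape-function parameters (NOT the real NLO prediction)
        lam = M_B - mb
        x = np.minimum((M_B / 2.0 - e_gamma) / lam, 1.0)
        spec = (1.0 - x) ** a * np.exp((1.0 + a) * x)
        return spec / np.trapz(spec, e_gamma)

    # placeholder "measured" spectrum with flat errors
    e = np.linspace(1.8, 2.6, 17)
    data = predict_spectrum(e, 4.65, 1.5) + rng.normal(0.0, 0.05, e.size)
    err = np.full_like(e, 0.05)

    # scan (mb, a); keep every point whose chi^2 stays acceptable
    points = [(np.sum(((data - predict_spectrum(e, mb, a)) / err) ** 2), mb, a)
              for mb in np.linspace(4.50, 4.80, 31)
              for a in np.linspace(0.5, 3.0, 26)]
    chi2min = min(c for c, _, _ in points)
    region = [(mb, a) for c, mb, a in points if c - chi2min <= 2.30]  # ~1 sigma
    print(f"acceptable mb: {min(m for m, _ in region):.2f}"
          f"-{max(m for m, _ in region):.2f} GeV")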

Cheers

Mike


From: Michael Luke 
To: ursl@physi.uni-heidelberg.de
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Tue, 8 Jun 2004 16:17:38 +0200


On Jun 8, 2004, at 3:22 PM, Urs Langenegger wrote:

>  I  think that  your proposal  amounts to
> option 1 (using the B-> s gamma photon energy spectrum).
>

Could be - I thought that was referring to using moments of b->s gamma
to get lambda-bar, which is not the same thing ... I am talking about a
fit to the full functional form of the spectrum ... I suppose as a
practical matter they may give very similar results ...

mike


----------------------------------------------------------------------
From: Zoltan Ligeti 
To: ursl@physi.uni-heidelberg.de
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Tue, 8 Jun 2004 10:18:26 -0700 (PDT)

Dear Urs,

I talked to Mike and have seen his email to you, we are both at MPI in
Munich right now.  I basically agree with what he said.  I am quite
worried whether a two-parameter form of the shape function can give a
reasonable description of it, because in practice what matters is really
not the integral across the whole shape function but mostly the part of
it that corresponds to large photon energies in b->s gamma.  This is
clear if you look at the relation between b->s gamma and the E_l or P_+
spectra; for the m_X spectrum the formulae indicate that you integrate
across the whole thing, but in reality the sensitivity is much greater
at the high end.  So, while I think that the b->c moments can ultimately
constrain moments of the shape function, I think the shape function
contains more physics, and so fixing all parameters of a model using
b->c measurements seems unjustifiable to me.  So I like Mike's proposal
(or one could introduce more parameters to the models, and then I would
not feel uneasy about fixing some from b->c while letting others float).

Cheers,  Zoltan


----------------------------------------------------------------------
From: "Ugo.Aglietti" 
To: Urs Langenegger 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Wed, 9 Jun 2004 15:12:53 +0200 (MET DST)

Dear Urs, it has been quite some time since I last followed the discussions
about the shape function, but I think that probably the best approach
would be your #1: to look at the photon spectrum.
It is clear that, in principle, the cleanest approach is that of
looking at ratios of spectra in which the shape function effects drop out,
as the latter is a (theoretically completely unknown) non-perturbative
function. But in practice, as far as I can understand it, for
experimental analyses - cuts, efficiencies, etc. - you also need the
complete differential distributions, which require model input
information.
In collaboration with Giulia Ricciardi, I am presently trying
to model the shape function in a simple analytical way, consistent
with (next-to-leading-order) resummed perturbation theory.
This way one can easily compare with the photon spectrum in b->s gamma
or with triple/double/single differential distributions in semileptonic
b->u decays. If the model is in agreement with the data, one can
trust all of its predictions for spectra, moments, etc.
If the model is not compatible with the data, that means one has
to derive from the data a kind of universal correction factor for the
shape function. Once the latter is fixed once and for all, any kind
of prediction can again be made.
That is our philosophy; if you are interested, we can contact you 
once our computations are complete.
Ciao
Ugo


----------------------------------------------------------------------
From: Matthias Neubert 
To: Urs Langenegger 
cc: Matthias Neubert 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Thu, 10 Jun 2004 11:39:34 -0400 (EDT)

Hello Urs,

Below are my answers to your questions. A really satisfactory
treatment will have to await a better understanding of 1/mb
corrections from subleading shape functions. This is needed
to (one day) make a generator that could replace the work Fulvia
DeFazio and I did.

> What is  the best way to  extract |Vub| from  charmless semileptonic B
> decays?
> 
> 1.  Combine   all  information  possible  from   B->s  gamma.  Include
>    semi-exclusive measurements as well (e.g. hep-ex/0207074)?

B->Xs+gamma remains an excellent way to constrain the shape function.
...

> 2.  Use  b->s  gamma  and  additionally  the  constraint  on  mb  from
>    semileptonic  B  decays, i.e.   mb  =  m_B  - \bar{\Lambda},  where
>    \bar{\Lambda} is  the HQET parameter  and mb is the  shape function
>    parameter.
> 
> 3. Use semileptonic moments. If the above procedure is not correct, do
>    you have a proposal for an alternative?

As far as I see, these two questions are related. I believe we have
definitively answered this point in hep-ph/0402094. We now know exactly
how to relate shape-function parameters (though not the ones in the De
Fazio-Neubert paper) to HQET parameters.

...

Best regards,

Matthias


----------------------------------------------------------------------
From: Mark Wise 
To: Urs Langenegger 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Thu, 10 Jun 2004 15:51:35 -0700 (PDT)

Dear Urs,

I haven't thought about this in a while, so take my remarks with a grain
of salt. I would guess you want to determine the HQET parameters like m_b
and lambda_1 from b -> c semileptonic decays (and weak radiative decays,
which are very sensitive to m_b), where there is more data, and use them
in the prediction for the charmless semileptonic rate in some region of
phase space. The large q^2 and large E_e region has no shape function, so
you don't need b -> s gamma. On the other hand, many other phase space
cuts that may give smaller experimental errors do need the shape
function, which can be extracted from the weak radiative decay. I am not
sure what parametrization would be best. One approach that directly
relates the energy spectra without extracting the shape function is that
of Rothstein and Leibovich, but I don't see a problem with extracting a
shape function using some parametrization as long as you check the
results don't depend too strongly on the parametrization used. I think
lambda_1, which you get from b to c semileptonic decays, does determine
some of the shape function (a particular moment if QCD corrections are
left out), but there should be some other free parameters as well;
otherwise the ansatz for the shape is too restrictive.

Mark


----------------------------------------------------------------------
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Wed, 09 Jun 2004

Hi Urs,

I'm sorry I don't have any more time to think about this right now, but 
since you're letting me be brief...

> 1.  Combine   all  information  possible  from   B->s  gamma.  Include
>    semi-exclusive measurements as well (e.g. hep-ex/0207074)?

Yes, but I would not use semi-exclusive measurements.  The theory only 
really works for truly inclusive quantities.  (In other words, I 
wouldn't know how to estimate errors on anything less than a truly 
inclusive measurement.)

> 2.  Use  b->s  gamma  and  additionally  the  constraint  on  mb  from
>    semileptonic  B  decays, i.e.   mb  =  m_B  - \bar{\Lambda},  where
>    \bar{\Lambda} is  the HQET parameter  and mb is the  shape function
>    parameter.

Yes, with the earlier caveats on not using semi-inclusive b -> s gamma.

> 3. Use semileptonic moments. If the above procedure is not correct, do
>    you have a proposal for an alternative?

Using semileptonic moments is fine.

> 4. Do  something else!  what?   (apart from using  approaches "without
>    recourse to  structure functions";  those methods will  be pursued,
>    but  here  we'd like  to  get  feedback on  how  to  use the  shape
>    function).

I wish I had a better idea than is in the literature!

> We'd like  to hear your  opinion!  (Please let  us know in  case you'd
> prefer us to keep your reply private.)

...


----------------------------------------------------------------------
From: Thomas Mannel 
To: ursl@physi.uni-heidelberg.de
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Sun, 13 Jun 2004 21:13:09 +0200

Hallo Urs,

I am sorry for the slow reply, but we had a local holiday here in
Germany, and I was out of town for a few days.

 > 1.  Combine   all  information  possible  from   B->s  gamma.  Include
 >  semi-exclusive measurements as well (e.g. hep-ex/0207074)?

I think this is the best one can do, although I would not include
semi-inclusive data, since this is not so easy to treat theoretically,
which means that it is hard(er) to assign an uncertainty.

>  2.  Use  b->s  gamma  and  additionally  the  constraint  on  mb  from
>    semileptonic  B  decays, i.e.   mb  =  m_B  - \bar{\Lambda},  where
>    \bar{\Lambda} is  the HQET parameter  and mb is the  shape function
>    parameter.

This problem reflects a problem in the Neubert-DeFazio approach: As
far as I remember they include QCD radiative corrections, in which
case you have to define what you mean by m_b. LambdaBar from
semileptonic decays also depends on the definition of m_b, so it is
not obvious how the shape-function m_b in Neubert-DeFazio is related
to this.


 > 3. Use semileptonic moments. If the above procedure is not correct, do
>    you have a proposal for an alternative?

Using the semileptonic moments assumes that the moments of the shape
function are related to the HQET parameters, which is only true at
tree level.  Matthias Neubert recently wrote a paper looking at the
relation between the shape function moments and the HQET parameters,
but I do not think that he solved the problem.

>
> 4. Do  something else!  what?   (apart from using  approaches "without
>    recourse to  structure functions";  those methods will  be pursued,
>    but  here  we'd like  to  get  feedback on  how  to  use the  shape
>    function).

I still think that the direct comparison between the inclusive
spectrum of b-->s gamma and b-->u ell nu is the best one can do; one
can even include QCD corrections.  Matthias Neubert reinvented an
interesting variable (a student of mine, Stefan Recksiegel, looked at
this a couple of years ago) in which the comparison should work more
easily, including perturbative QCD corrections.

One more general remark: There has been quite some discussion on how
to include perturbative QCD consistently into the shape function
discussion.  I do not think that this problem is solved in a
satisfactory way; it may happen that SCET will help clarify the
situation eventually.

I hope this helps a little bit,
Thomas


----------------------------------------------------------------------
From: Stefan Bosch 
To: ursl@physi.uni-heidelberg.de
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Mon, 14 Jun 2004 17:41:45 -0400

Hallo Urs,

...

All my comments are based mainly on our two recent articles 
hep-ph/0402094 and hep-ph/0403223.

> We write this  mail for HFAG (http://www.slac.stanford.edu/xorg/hfag/)
> to  ask for  your advice  on the  extraction of  |Vub|  from charmless
> semileptonic B decays.   Any comment you'd like to  share is very much
> welcome (and even a very short reply of e.g. "yes!, no!, no!, ???"  to
> the four questions below will help us).
>
> All experiments  rely on deFazio/Neubert as MC  generator covering the
> full   phase  space   to  determine   experimental   efficiencies  and
> corrections and sometimes  also to extrapolate to the  full rate.

This is actually not ideal, but right now probably the best way to go.  
DeFazio/Neubert do not describe the shape function region relevant for 
the b -> u semileptonic decay in a state-of-the-art manner (e.g. 
including resummation effects). However, there is unfortunately not yet
an MC generator available that covers the full phase space. We hope to
provide something like this in the future.

> Here
> the shape  function is parametrized  with the "exponential"  form with
> two parameters  "mb" and "a".  So  far, the central  values and errors
> for  these   parameters  have  been  determined   with  two  different
> approaches (illustrated in figure 4 of hep-ex/0402009)

We suggest including the new knowledge of the asymptotic form of the
shape function and the relation of the renormalized shape function to
HQET parameters. The latter property leads to the definition of a new,
physical scheme for the running heavy-quark mass, the so-called 
shape-function mass (see eq. (70) in hep-ph/0402094 for the relation to 
the PS and kinetic masses). A generalization of the "exponential" form 
including the asymptotic behavior has been given in section 9.2 of 
hep-ph/0402094.

>  o Use the B -> s gamma  contour (from CLEO). The contour is large and
>    dominated by  statistical errors.  BELLE now has  a new measurement
>    of the b-> s gamma photon energy spectrum.

This is probably the preferred method. However, there is not yet a 
complete treatment of the b->s gamma decay available, including e.g. 
resummation effects and mixing of operators. This is work in progress.

>  o  Use  the moments  of  b  -> c  l  nu  decays  (and removing  terms
>    proportional to alpha_s^2 and  1/m_b^3). This assumes that the HQET
>    parameters are directly related to the shape function parameters.

They are if one uses a physical mass scheme like the shape-function 
mass scheme. At the moment this might give smaller errors.

> It would  be highly  desirable if all  experiments (BABAR,  BELLE, and
> CLEO) used  a common set  of "shape function"  parametrizations, their
> parameters and  errors on  these parameters. For  the HFAG  average of
> |Vub|, the first approach was taken (after much discussion)1.
>
> What is  the best way to  extract |Vub| from  charmless semileptonic B
> decays?

One of the main questions is which cut you want to use to eliminate the 
charm background. We suggest the  P_+ cut introduced in hep-ph/0402094 
and hep-ph/0403223.  The main advantages compared to the hadronic 
invariant mass cut (they have very similar efficiencies) are the 
simpler construction of shape-function independent relations to the 
photon spectrum in B -> X_s gamma and the fact that the P_+ spectrum 
can be evaluated within a systematic framework, which makes the 
calculation of power corrections feasible. The lepton energy cut is 
theoretically least favored because of its low efficiency which makes 
it subject to theoretical errors. However, it's the experimentally 
easiest accessible.
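
For reference, the P_+ variable of hep-ph/0402094 is the hadronic
light-cone momentum P_+ = E_X - |p_X| in the B rest frame; a trivial
illustration (the numbers below are made up):

    import math

    def p_plus(E_X, px, py, pz):
        # P_+ = E_X - |p_X| for the hadronic system X (B rest frame)
        return E_X - math.sqrt(px ** 2 + py ** 2 + pz ** 2)

    # example: hadronic system with m_X = 1.0 GeV and |p_X| = 1.5 GeV
    print(p_plus(math.hypot(1.0, 1.5), 1.5, 0.0, 0.0))   # ~0.30 GeV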


> 1.  Combine   all  information  possible  from   B->s  gamma.  Include
>    semi-exclusive measurements as well (e.g. hep-ex/0207074)?

Probably not yet - lacking the thorough theoretical investigation of b 
-> s gamma (see above). But the mean photon energy might be very 
helpful.

> 2.  Use  b->s  gamma  and  additionally  the  constraint  on  mb  from
>    semileptonic  B  decays, i.e.   mb  =  m_B  - \bar{\Lambda},  where
>    \bar{\Lambda} is  the HQET parameter  and mb is the  shape function
>    parameter.

Yes, using the shape-function mass scheme.

> 3. Use semileptonic moments. If the above procedure is not correct, do
>    you have a proposal for an alternative?

Yes, using the shape-function mass scheme.

> 4. Do  something else!  what?   (apart from using  approaches "without
>    recourse to  structure functions";  those methods will  be pursued,
>    but  here  we'd like  to  get  feedback on  how  to  use the  shape
>    function).

Employ the P_+ cut introduced in hep-ph/0402094 and hep-ph/0403223 (see 
above).

...

Herzliche Gruesse,

Stefan


----------------------------------------------------------------------
From: Paolo Gambino 
To: Urs Langenegger 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Fri, 18 Jun 2004 12:32:41 +0200 (CEST)

Dear Urs, 

sorry for the delay. Here are a few comments and answers to your questions.

First a general comment. Non-perturbative parameters like the b quark 
mass, lambda_1 etc. are universal quantities, i.e. if they are properly 
defined they measure intrinsic properties of the B meson which can be 
probed in many different experimental observables.
It follows that, if one proceeds carefully, it is best to combine ALL 
experimental constraints on these quantities. 
Assuming that the moments of the shape function are related to mb, 
mu_pi^2 etc, the shape function to be used in Vub extractions MUST
satisfy all those constraints.

Recently, however, Bauer & Manohar have argued that after inclusion of QCD 
corrections the moments of the shape function are NOT directly related to 
the matrix elements of local operators like mu_pi^2 etc. 
In a paper based on the same SCET/HQE inspired formalism, Bosch et al.
subsequently pointed out that the moments of an appropriately 
renormalized shape function  are still related to properly defined 
non-perturbative parameters.

I haven't studied these papers in detail and I'll refrain
from definite statements. But I should stress that the perturbative
definition of the shape function is an old and crucial problem (pioneering
and unrecognized work was done by Aglietti). It is
interesting to see now that it is closely related (according to Bosch et 
al) to the proper definition of mu_pi^2 etc. In fact Neubert's group 
advocates a variant of the "kinetic scheme" definition.

Even taking Bosch et al. for granted, some subtleties arise and
the theory picture is still moving, so you may want to be careful. But my
impression is that

1) a meaningful relation between HQE parameters and global shape function
properties exists and should be used in the analysis of data

2) Neubert-De Fazio is obsolete and should be dropped. (This, BTW, is
something on which Bauer and Manohar certainly agree.) Bosch et al. have a
new recipe, which could be a good starting point for new analyses.

You also  suggest that an alternative way is to use radiative 
decays only (as opposed to using also s.l. moments to constrain the 
shape function). Assuming there were no way to relate the shape 
function to HQE parameters, which I doubt, this could be safer. But we 
have to be realistic about the size of effects one can expect: the use of the
CLEO data only in fig.4 of hep-ex/0402009, with an error on mb of 200 MeV,
is not conservative, it is ridiculous (and misleading, given the very
high cut, see Bigi-Uraltsev). Moreover, how can you use that 
method and then employ De Fazio-Neubert? If DF-N were correct, we could simply
use the best determinations of lambda1 and mb_pole. 

Finally, concerning your point 4: there are ways to check quantitatively 
the impact of certain assumptions (Neubert-De Fazio, subleading shape 
functions etc) and in general our understanding of the shape function 
region. One  strategy has been proposed in

U. Aglietti, M. Ciuchini, P. Gambino, "A new model-independent way of
extracting |V(ub)/V(cb)|", Nucl. Phys. B637:427-444, 2002, hep-ph/0204140,

where a ratio of radiative and semileptonic spectra is built. It is 
a short distance quantity, free of Sudakov logs, calculated at NLO in 
QCD, which can be studied at different cuts. It is simple and useful.
Similar relations can be found in Bosch et al., hep-ph/0402094.

I hope this can help. As soon as I have studied the new papers more 
carefully I might have additional comments. 

best regards,
Paolo


----------------------------------------------------------------------
From: Ian Low 
To: Urs Langenegger 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Sun, 20 Jun 2004 00:32:47 -0400 (EDT)


Dear Urs,

Sorry for the late reply; I'd been traveling. Here are some thoughts on
the questions you asked after some quick thinking. Please be aware that my
current research focus is not on B-physics anymore and, therefore, my
knowledge on these issues might not be up-to-date.

I understand that in practice it'd be highly desirable for all
experimental groups to have a common set of "shape function"
parametrizations. However, from the theoretical point of view, I think it
is very difficult to give a proper interpretation of these
parametrizations.

As an example, you mentioned in the email that you use deFazio/Neubert as
the MC generator to cover the full phase space. A recent paper
(hep-ph/0312366) by Burrell, Luke, and Williamson showed that, beyond
leading order in the shape function, it is not really sensible to naively
convolute the parton model result (the alpha_s corrections) with the
non-perturbative corrections (shape function), as De Fazio and Neubert
did, for the prescription breaks down when including subleading shape
functions. Moreover, corrections from these subleading shape functions are
quite large. In the above paper the authors used three different
parametrizations and found quite different hadronic mass spectra for the
charmless semileptonic decays. (cf. Fig. 5 in that paper.)

Of course, one can just go ahead and pick one particular parametrization.
But then it's not clear (at least to me!) how to argue one parametrization
is better than the other. The key point is that the shape function is
sensitive to an infinite number of HQET parameters. Thus knowing only a few
moments, e.g. \bar{\Lambda}, isn't really telling us much of the story. You
can go ahead and pick one way to parametrize shape functions, but don't
lose sleep worrying whether you've picked the best one; there is no best
one, in my view. And as long as you understand that any parametrization of
the shape function simply amounts to introducing model dependence, there is
nothing wrong in picking an easy way of parametrizing the shape function.

Having said all this, let me bring to your attention another interesting
conclusion drawn in Burrell et al.'s paper: even though there are large
corrections coming from the subleading shape functions, the corrections
largely cancel when convoluting the semileptonic decay with the B ->
s\gamma rate (the "without recourse to shape functions" approach).

I understand that these thoughts might not be useful in helping you
determine which parametrization to use. But the message I'm trying to
convey here is that, since these parametrizations do not come from first
principles, it is very difficult to argue that one is better than the
other. So you may as well pick one that will make life easy. A
theoretically meaningful error awaits the determination without modeling
the shape function.

Best,
Ian


----------------------------------------------------------------------
From: ibigi@nd.edu
To: ursl@physi.uni-heidelberg.de,
   Riccardo Faccini 
Cc: Nikolai Uraltsev 
Subject: Re: |Vub|: "Shape function" parameters and errors?
Date: Tue, 22 Jun 2004 17:55:02 -0500

Gentlemen, 

since Riccardo and I will talk on the phone Wednesday before noon, and
since I believe a multistep dialogue will be needed to clarify the issues,
I decided to send you some comments right away -- even if you might get
the impression that they went straight from my `hip' to my lips with
hardly any detour via my head. At the same time I must warn you that I
feel very frustrated: I had just spent an hour typing a long message to
you -- then the screen flickered, and the message just disappeared -- I
have no clue why. I will send you a much shorter version now.
Anyway:
(1) It is one of the great strengths of the OPE that it allows one to
express a host of transition rates in terms of the expectation values of a
`universal' cast of operators, albeit with different coefficients/weights.
Once you determine these heavy quark parameters in one reaction -- like
from the moments of SL b -> c decays -- you can use them everywhere --
with the caveats mentioned below. Everyone except for Sheldon Stone
understands that.
(2) The BABAR analysis yielded for the combination m_b - 0.7 m_c, which
controls the low moments of b->c, an uncertainty of merely +/- 17 MeV,
while for m_b, which controls b->u, an uncertainty of +/- 70 MeV. Yet once
the higher mass moments have been measured with (even) better accuracy,
the error on m_b can be reduced further.
(3) It is a sign of progress that everyone understands that the OPE plays
a central role here. However there are -- to put it politely -- different
implementations of the OPE. For there is no quark mass etc. `per se' in a
quantum field theory. These heavy quark parameters have to be defined in a
way that passes full muster by quantum field theory. This is certainly
achieved in the `kinetic' scheme, which is a very robust one. The numbers
quoted above refer to it.
(4) I think it is fair to say -- I think I am actually polite here -- that the
HQET scheme is much less robust. I would not rule out that the HQET activists 
will somehow reproduce our successful description in the future -- we will see.
But this has not happened yet. 
(5) At present results from B -> s gamma -- the photon spectrum and its
moments -- can serve as a cross check, but they should not be part of the
primary analysis, as Kolya and I have stressed at various places in the
literature. For as pointed out in our PLB and in the memo we sent together
with Don Benson to BABAR last December, a high lower cut on the photon
energy introduces a bias and can even invalidate the OPE treatment due to
an insufficient hardness. Even
at E_{cut} = 2 GeV those biases are quite significant. We are just in the
process of writing up a much more detailed analysis of these effects and the
expected photon spectrum. In this context please notice that the cut on the
photon energy quoted as 1.8 GeV by BELLE in their analysis refers to the Y(4S)
frame -- not the B rest frame where we do our calculations. The BELLE cut in
the B rest frame is closer to 1.94 GeV, as Oliver was told by Koppenburg. 
(6) Things get even trickier when using quark distribution functions, which
contain some model elements. One has to be very careful how to relate the model
parameters to the heavy quark parameters of the OPE. It can be done -- but I do
not think most people apply sufficient care here. We have discussed that in our
2002 paper on the photon spectrum; our new results will allow us to update that
very soon. There we also explain in considerable detail how best to extract
V(ub). 
(7) Allow me a last comment for now. I can sympathize with the
experiments' desire to deal with only a single -- i.e. `catholic' -- shape
function. However there is a significant danger in that. Truth in science
is not based on a short-term majority vote. If it had been up to a
majority vote, the `kinetic' scheme would never have been used by BABAR --
you know that as well as I do. Secondly, freezing the choice of a
parametrization early on is in general a very dangerous procedure -- as
shown by many examples -- since it outlaws creativity and learning. I have
no problem if one uses a set of different frameworks -- like DELPHI used
both the HQET and the kinetic scheme for their V(cb) extraction -- and
does not treat them uncritically like gospel.

Sorry for that final pontificating. That is it for now. 
Greetings,  Ikaros  


----------------------------------------------------------------------
From: "Nikolai Uraltsev" 
To: bigi.1@nd.edu, ursl@physi.uni-heidelberg.de
Subject: distribution function parameters etc.
Date: Wed, 23 Jun 2004 17:08:43 +0400 (MSD)

Dear Urs,

  Let me try to answer your questions to the extent I can understand the
problem, and to the extent it is easy over e-mail.

  First of all, let me say that I address only purely theoretical aspects.
I am in no position even to express an opinion on which kinds of
measurements are better or more reliable from the experimental viewpoint;
I simply do not know this. For instance, I cannot really comment on
including "semi-exclusive measurements". If this is a reliable measurement
of the inclusive decay rates and moments, then fine; if it measures
something different, then how to use it?

  A second preliminary consideration -- I feel it appropriate to lay out
the theoretical perspective, since it may help to clarify the angle from
which I am making my points -- and this may not be exactly what you
actually asked.

 In general, deFazio/Neubert is a model which is not the state of the art,
but it is not particularly bad either (if I understand correctly what
they do) [provided you use \alpha_s=0.3, as I had a chance to state a
year ago]. Having said this, I'll not return to the aspects
related to using something alternative. This is a model, and it
certainly misses some physics; its significance really depends on
what concretely is done with it.

From what you said, it seems that the model is fixed once you fix the
two parameters, m_b and a. They are one-to-one related, even if
not directly, to (our) b quark mass and \mu_\pi^2. Therefore, it seems to
me that from the GENERAL perspective (let's abstract from the concrete
way it is done), the controversy you raise is about the better way
to determine these two parameters.
If this is a question, then (once again, neglecting "subtleties" which in
the actual analyses are 100% important and may change things to the
opposite), generally speaking they are more or less equal, or, saying
differently, should lead to the same result within the accuracy of the
model itself. (Not in practice -- but about this later). However, neither
is the best way. In particular, \mu_pi^2 is better and more reliably
determined from more inclusive B -> X_c semileptonic moments, as you know.

  So, this is related to one of the questions raised. If we need the
values of the heavy quark parameters to know the two parameters in the
model, we have the most precise information from the b->c distributions.
  Here, however, a clarification is required. I do not see a problem with
BaBar's (and, underway, CLEO's) value for \mu_\pi^2; in particular
it appears just where theory expects it, and this certainly adds
confidence. My problem is rather the value of m_b. BaBar obtained a
rather accurate value for m_b as well, not only for m_b-0.74m_c, and this
is a kind of surprise for me, as I said in the talks and wrote in the
proceedings. I do not state that there is something wrong with this, but
I personally am not convinced that it did not come from literally fitting
some not quite reliable details of the cut-dependence of hadronic
moments, something Ikaros and I warned about last fall. Maybe I'm
only an alarmist, but, I repeat, I am not sure the dedicated studies have
been done to be certain nothing of this sort happened.

  Having this cautious remark in mind, let us see what we have. As a rule
of thumb for the estimated accuracy, we assume the correspondences

  2*<E_gamma>  <==>  m_b

and

  12*<(E_gamma - <E_gamma>)^2>  <==>  \mu_\pi^2
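
A trivial numerical illustration of this rule of thumb, with a placeholder
binned spectrum standing in for the measured one and no cut-bias
corrections applied:

    import numpy as np

    e = np.linspace(1.8, 2.6, 17)                 # E_gamma bins [GeV], placeholder
    w = np.exp(-0.5 * ((e - 2.32) / 0.19) ** 2)   # placeholder spectral weights

    mean_e = np.average(e, weights=w)
    var_e = np.average((e - mean_e) ** 2, weights=w)
    print(f"m_b     ~ 2*<E_gamma> = {2 * mean_e:.2f} GeV")
    print(f"mu_pi^2 ~ 12*<(E_gamma - <E_gamma>)^2> = {12 * var_e:.2f} GeV^2")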

Then, regarding \mu_\pi^2, the BaBar value itself is more accurate than 
what you get from b -> s+\gamma even if you forget about theory 
uncertainties associated with the latter.

With respect to m_b, if you accept the BaBar values, they are still a
little more precise than those from b -> s+\gamma. However, as I said, they
must be taken with reasonable caution. Even then, however, there are
alternative determinations of m_b which all point to the same value within
40 MeV (we should discard those, even published, which are theoretically
doubtful). From these alternative determinations m_b is known with an
uncertainty of 60 MeV or so, whether or not experiments tend to take this
into consideration. I cannot help pointing out that the theory status of
those analyses is qualitatively higher than the scrutiny available for,
say, b -> s+\gamma decays.
In fact, combining all this existing and new information on B decays, I
think one can state rather confidently that m_b must be about 4.60 GeV
with an accuracy not worse than 50-60 MeV. Higher values would too
strongly contradict Upsilon sum rules, lower values would be difficult to
accommodate in radiative and b->c SL decays; the actual probability
distributions are not Gaussian!

  So, this sets the stage for our discussion. We DO KNOW the values of the
parameters in the model today with an accuracy at least as good as what
b->s + \gamma decays can determine at present. If one keeps
reservations about the theory involved in the extraction of m_b and/or
\mu_\pi^2 from other theory, or from SL experiments, why then is such high
trust placed in the Neubert-deFazio model, which is simply a somewhat
toy-level application of the same theory?

  Please understand that I'm not saying experiments are wrong in adopting
a strategy relying only on what they measure in their own, or
similar, experiments, even if the motivation for this was inherited from
theoretical papers and opinions which nowadays are regarded as rather
obsolete among theorists.  This is a legitimate procedure, it even has
some advantages, and in what follows I assume it. Simply, a theorist
cannot proceed further to the details without explaining the actual
theoretical environment.


  Now closer to the questions themselves. So, we assume we rely on
the b -> s + \gamma spectra and do not consider m_b known a priori. Still,
some comments to see if I appreciate the situation correctly.

  I understand you need the concrete MC model and to this end have to fix
its parameters. However, let us look at where in practice you use it
when all the analyses are done. You will probably rely on it in
estimating the rates in the kinematics where you place the cuts. (If it
is used to calibrate efficiencies inside the fiducial kinematic domain,
some of the arguments below do not apply). Then things depend on what
these cuts are.

If you discriminate against b->c using only the lower cut on the charged
lepton energy, I doubt the model itself is accurate enough -- it is not
supposed to capture much at such a "high resolution" in the end-point
domain. The situation is even worse if only the lower cut on q^2 is 
considered -- the model is largely irrelevant at large q^2, being 
focussed on just the opposite physics. It would be relevant for studying 
M_X^2 distributions, in particular at small q^2.

From our old studies of M_X^2 distributions I remember that the principal
question concerns the fraction of the decays in the high-mass tail which
has to be cut off. I expect that the same question will pass
throughout all applications more or less regardless of the concrete
combination of cuts imposed.
With the functional shape of the distribution function fixed, this
fraction depends significantly on m_b and \mu_\pi^2 (or a). The problem I
want to see appreciated, however, is that it is really determined only as
long as one sticks to a particular functional form. Therefore, even
fitting these two parameters accurately, we really may not know how fast
the tail falls off.
In principle (or as a matter of principle), this can be determined
experimentally from the same b -> s+\gamma spectrum by looking at the lower
tail. However, this is just the domain where, apparently, no good
measurements are expected!
This problem is there whether you fit the shape of the distribution, or
the moments, or both. Unless you measure the distribution in its lower
tail (in terms of E_\gamma, or the high tail in terms of hadronic mass), you
do not know how fast it fades away, and neither fitting an ansatz nor
measuring moments changes this fact.

Therefore, I do not see how one can potentially get around this 
particular element of model dependence by only improving the analysis 
of measurements in the "traditional" kinematic regimes.
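
To make the tail point concrete, here is a toy comparison (illustrative
forms and numbers only) of two distribution-function ansaetze tuned to
approximately the same first two moments, which nevertheless put different
amounts of probability into the far tail:

    import numpy as np
    from scipy.integrate import quad

    lam, mupi2 = 0.65, 0.40       # lambda_bar [GeV], mu_pi^2 [GeV^2] (toy)
    sig2 = mupi2 / 3.0            # target variance of k+

    # ansatz 1: "exponential" form; its variance is lam^2/(1+a), so fix a
    a = lam ** 2 / sig2 - 1.0
    f1 = lambda k: (1 - k / lam) ** a * np.exp((1 + a) * k / lam)
    # ansatz 2: Gaussian with (approximately) the same mean and variance
    f2 = lambda k: np.exp(-k ** 2 / (2 * sig2))

    for name, f in [("exponential", f1), ("gaussian", f2)]:
        norm = quad(f, -3.0, lam)[0]
        tail = quad(f, -3.0, -0.5)[0] / norm   # fraction with k+ < -0.5 GeV
        print(f"{name:12s}: tail fraction = {tail:.3f}")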

***

Now really down to the concrete questions in the original message.

   o Use the B -> s gamma  contour (from CLEO). The contour is large and
   dominated by  statistical errors.  BELLE now has  a new measurement
   of the b-> s gamma photon energy spectrum.

   o Use  the moments  of  b  -> c  l  nu  decays  (and removing  terms
   proportional to alpha_s^2 and  1/m_b^3). This assumes that the HQET
   parameters are directly related to the shape function parameters.


Unfortunately, the questions sound too general, or I do not understand
what is precisely meant. The contour -- what is it for? For the deFazio
parameters or for m_b and the kinetic value? And, most importantly, how
has it been obtained?
As I indicated above, in principle the b -> c l\nu data provide highly
sensitive constraints. Therefore, in general terms, I would certainly
advocate a strategy where they are included. However, the problem is to
use them properly. To be more precise, what is the point in using the
constraints if they are obtained using wrong (ok, inaccurate)
expressions, which were even directly observed not to describe
experimental data?

Let me abstract from technicalities, which are also important, but
are more separated from the physics. As I have explained, b -> c l\nu data
are probably not very sensitive to m_b itself even if \mu_\pi^2 is known.
And what you want to know in the first place is just m_b! However, if you
look at the existing applications, with known -\lambda_1 the data seem to
fix m_b quite precisely. Why?
Mainly, I believe, because they use the mass relation between \bar M_B
and \bar M_D. However, this is quite an unreliable relation. There is
absolutely no point in relating the parameters of the b->u or b->s
transitions to the mass of the charm quark, non-local correlators or the
assumptions of convergence of the 1/m_c expansion; the latter are a dead
end, these are never measured in experiment and will always remain just
assumptions.

Realizing this -- that, in fact, the primary information on m_b in this
approach is not extracted from the data but rather comes mainly from
poorly substantiated assumptions -- one may start wondering whether it is
better not to include the b -> c l\nu data at all. Of course, as we know,
they do fix the kinetic expectation value -- however, the extractions are
made in one overall fit. Hence if something wrong is done with m_b, what
is obtained for \mu_\pi^2 becomes questionable as well.

So, my conclusion here illustrates the problem I mentioned in the
beginning -- it is difficult to answer questions correctly where they
are not very precise. I think the b -> c l\nu moments should be used to
constrain the parameters of the light-cone distribution. However, this
should be done properly. And if it cannot be done properly, then it
may be safer to exclude them altogether.
  From the theoretical perspective this probably sounds weird. How to do
the extractions properly and how to identify the parameters of the model
with the well-defined heavy quark parameters is known and has been
described. So, it is not clear why one should throw away a piece of valid
physical information -- whether it radically shrinks the intervals, or
only slightly constrains the resulting domain. Yet, if this is not done
properly, I would say it is better not to include it. Then one is left
with only b -> s+\gamma.

**

Now, another group of questions, which I basically understand as: should
one use fits of the shape of the distribution, or a fit of the two (?)
moments?

Once again, it seems to me the answer here depends on what is actually 
done, and how. From some arguments above, it may follow (once again, in 
GENERAL) that this must be the same. However, probably not in practice. 

  Fitting the moments may be convenient if the constraints
from the SL moments are incorporated -- this would probably make it
more straightforward. However, there is a problem related to the lower
cut on E_\gamma, the biases we discussed. If you use the right approach,
you find they are quite significant numerically. So, the correct
procedure has to incorporate them properly. And I suspect that this may
not have been done in practice, whether or not b -> c l\nu constraints
are considered.

  As far as I understand, the equations for the moments routinely rely
on the usual expressions from Ligeti et al., and they are not
sufficiently correct at the cuts we discuss. Then we come to the
same point -- one had better not use relations at all if they are
incorrect. One really needs to include the biases here.

  On the other hand, fitting the shape of the distribution directly may be 
largely free from this particular problem (depending on what is done 
precisely). Indeed, you just convolute the perturbative spectrum with the 
nonperturbative distribution function to obtain the spectrum in the whole 
domain. Then you fit only over the domain where you measure the spectrum
-- in this way the biases are more or less intrinsically included in
the treatment.
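
A minimal numerical sketch of this procedure -- the perturbative kernel,
the shape-function parametrization, and the "data" below are all
placeholders, not the real NLO inputs:

    import numpy as np
    from scipy.optimize import curve_fit

    M_B = 5.279

    def shape_fn(k, mb, a):
        # "exponential" ansatz, normalized on the grid k
        lam = M_B - mb
        x = np.minimum(k / lam, 1.0)
        f = (1.0 - x) ** a * np.exp((1.0 + a) * x)
        return f / np.trapz(f, k)

    def pert_kernel(e_gamma):
        # placeholder for the perturbative photon spectrum (narrow peak);
        # a real analysis would use the full perturbative result
        return np.exp(-((e_gamma - 2.45) / 0.03) ** 2)

    def predicted(e_gamma, mb, a):
        # smear the perturbative spectrum with the distribution function
        k = np.linspace(-1.5, M_B - mb, 400)
        f = shape_fn(k, mb, a)
        spec = np.array([np.trapz(f * pert_kernel(E - k / 2.0), k)
                         for E in np.atleast_1d(e_gamma)])
        return spec / np.trapz(spec, e_gamma)

    # fit ONLY over the measured window, e.g. E_gamma > 1.9 GeV
    e_fit = np.linspace(1.9, 2.7, 40)
    rng = np.random.default_rng(1)
    data = predicted(e_fit, 4.62, 1.6) * (1 + rng.normal(0, 0.03, e_fit.size))

    popt, _ = curve_fit(predicted, e_fit, data, p0=[4.60, 1.5],
                        bounds=([4.40, 0.3], [4.90, 4.0]))
    print(f"fitted mb = {popt[0]:.3f} GeV, a = {popt[1]:.2f}")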

  So, fitting the shape of the distribution in this case seems to be more
fool-proof. The disadvantage may be that you find yourself more
tightly stuck with the particular ansatz, and in the long run it may
become difficult to disentangle what has really been established from what
is an artefact of the assumed parametrization. This story is quite
familiar from the ACM ansatz for b->c SL decays, where the model was
used in its orthodox form much longer (in both the theoretical aspect
and the time frame) than it should have been. ACM is not the only example.

  At the same time, there is nothing particularly difficult in suggesting
an approach which, if not completely free from all these problems, at
least eliminates all inconsistencies and all weak links which can be
gotten rid of. I have a concern, though, that once you have chosen a
particular "approved" strategy and reached at least some theory consensus
on the better option at a given moment in time (say, summer 2004),
experiments will be very reluctant to introduce any modifications in the
future.  ACM and ISGW applications illustrate this point quite clearly
-- once canonized, even good models may later turn into a kind of braking
force.

********************

I appreciate that what I have discussed in this response may sound too
general and/or philosophical. Urs, I hope you know me well enough to
believe that I'm not trying to evade further discussion. If you are not
satisfied with my considerations or did not get answers to the concrete
questions you are concerned with, feel free to write again. I'll try to
respond promptly -- sorry, for technical reasons I could not do so this
time.

Kolya