%% slacpub7056: page file slacpub705600231.tcx.
%% subsubsection Renormalization [slacpub705600231 in slacpub7056002: ^slacpub70560023 >slacpub705600232]
%%%% latex2techexplorer block:
%% latex2techexplorer page setup:
\newmenu{slacpub7056::context::slacpub705600231}{
\docLink{slacpub7056.tcx}[::Top]{Top}%
\subsectionLink{slacpub7056002}{slacpub70560023}{Above: 2.3. Problems and Open Issues }%
\subsubsectionLink{slacpub7056002}{slacpub705600232}{Next: The Vacuum }%
}
%%%% end of latex2techexplorer block.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% author definitions added by nc_fix
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\def\a{\alpha}
\def\b{\beta}
\def\D{\Delta}
\def\G{\Gamma}
\def\e{\epsilon}
\def\g{\gamma}
\def\d{\delta}
\def\p{\phi}
\def\vp{\varphi}
\def\s{\sigma}
\def\l{\lambda}
\def\L{\Lambda}
\def\th{\theta}
\def\om{\omega}
\def\del{\partial}
\def\ha{\frac{1}{2}}
\def\psibar{\overline{\psi}}
\def\sla#1{#1\!\!\!/}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%% end of definitions added by nc_fix
\subsubsection{\usemenu{slacpub7056::context::slacpub705600231}{Renormalization }}\label{subsubsection::slacpub705600231}
All nontrivial quantum field theories are afflicted with
divergences, and LC theories are no exception. The theory
must first be made finite by introducing a regulator, and then the
dependence on the unphysical parameters that characterize the
regulator (generically called cutoffs) must be removed by suitably
chosen counterterms. These two problems are of course linked, as the
counterterms required in general depend strongly on the regulator
used. In particular, it is desirable for a regulator to respect as
many symmetries as possible, so that counterterms will be restricted
to invariant operators.

It is useful to distinguish three generic types of cutoff that are
necessary for LC field theory:\footnote{For actual calculations one
might use a more sophisticated cutoff scheme than that presented
here, for example the ``invariant mass'' cutoff \cite{38}, which
preserves the kinematic Lorentz symmetries on the LC, or the
similarity scheme \cite{39}. The present discussion is merely
intended to highlight the conceptual issues.}
\begin{itemize}
\item A cutoff on light-cone energies: ${p_\perp^2+m^2\over p^+}
<\Lambda$
\item For massless particles, a cutoff on small longitudinal
momenta: $p^+>\lambda$
\item A possible cutoff on particle number: $n<\nu$
\end{itemize}
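These three cutoffs act as a simple filter on Fock states. As a toy sketch (not from the paper; the function and parameter names are illustrative), with each parton described by $(p^+, p_\perp, m)$:

```python
def lc_energy(p_plus, p_perp, m):
    """Light-cone energy of a single parton: p^- = (p_perp^2 + m^2) / p^+."""
    return (p_perp ** 2 + m ** 2) / p_plus

def passes_cutoffs(partons, Lambda, lam, nu):
    """Return True if a Fock state survives all three generic LC cutoffs.

    partons -- list of (p_plus, p_perp, m) tuples, one per parton
    Lambda  -- upper bound on each parton's LC energy
    lam     -- lower bound (lambda) on longitudinal momenta p^+
    nu      -- upper bound on particle number (n < nu)
    """
    if len(partons) >= nu:  # particle-number cutoff
        return False
    for p_plus, p_perp, m in partons:
        if p_plus <= lam:  # small-p^+ (longitudinal) cutoff
            return False
        if lc_energy(p_plus, p_perp, m) >= Lambda:  # LC-energy cutoff
            return False
    return True
```

Note how a ``wee'' parton (tiny $p^+$) fails the longitudinal cutoff and, through the dispersion relation, would in any case fail the energy cutoff.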
All of these remove high-energy states on the LC, so that their
counterterms will be local in LC time.\footnote{That the third
cutoff removes high-energy states follows from the positivity of
longitudinal momenta: any state with a large number of particles
must contain some ``wee'' partons, which have high LC energies.
This is another significant difference between working on the LC and
at equal times, where states with many partons are not necessarily
high-energy states.}
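The counting argument in this footnote can be made explicit (a sketch using the standard LC dispersion relation): if a state of total longitudinal momentum $P^+$ contains $n$ partons, then since each $p_i^+>0$,
\[
\min_i\, p_i^+ \;\le\; \frac{P^+}{n}\,,
\]
so at least one parton has LC energy
\[
p_i^- \;=\; \frac{p_{\perp i}^2+m_i^2}{p_i^+} \;\ge\; \frac{n\,(p_{\perp i}^2+m_i^2)}{P^+}\,,
\]
which (for $p_{\perp i}^2+m_i^2$ bounded away from zero) grows linearly with $n$.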

There are two main difficulties that arise in the determination of
these counterterms. First, all known regulators that are
nonperturbative and applicable to Hamiltonians violate Lorentz and
gauge invariance. That this will generically be the case can be
seen by noting that some subset of the Lorentz generators are
dynamical, and thus mix states of different particle number. Any
truncation that limits particle number will in general violate these
symmetries. Gauge invariance will be broken for essentially the same
reason; in QED, for example, the Ward identity relates Green
functions that involve intermediate states with different particle
content. This means that the counterterms themselves must also
violate these symmetries, so that physical quantities computed from
the full Hamiltonian can be Lorentz and gauge-invariant. This is a
major complication, as it drastically increases the number of
possible operators that can occur.

The other main complication follows from the structure of the
dispersion relation on the LC [Eq. (\docLink{slacpub7056002.tcx}[disprel]{8})]. Transverse UV
divergences ($p_\perp\rightarrow\infty$) can occur for any value of
$p^+$. This means that counterterms for these divergences in
general involve functions of ratios of all available longitudinal
momenta. An analogous result holds also for small-$p^+$
divergences: they can occur for any $p_\perp$, and so counterterms
for these will involve functions of transverse momenta. Thus there
are in general an infinite number of possible counterterms, even if
we restrict consideration to relevant or marginal operators (in the
sense of the renormalization group). These problems indicate that
renormalization in LC field theory is significantly more complicated
than in other formulations. It is here that the familiar ``Law of
Conservation of Difficulty'' asserts itself.
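For reference, the dispersion relation invoked above is the standard LC form (consistent with the energy cutoff written earlier),
\[
p^- \;=\; \frac{p_\perp^2+m^2}{p^+}\,,
\]
so the transverse UV region $p_\perp\to\infty$ is reached at any fixed $p^+$, and the longitudinal region $p^+\to 0$ at any fixed $p_\perp$; the two types of divergence are entangled in exactly the way just described.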

The simplest approach to renormalization is to just compute the
counterterms perturbatively. In analogy with improved lattice
actions, the idea is that asymptotic freedom should make this
sensible if the cutoff is sufficiently high. This is potentially
correct for states that are removed by the cutoff $\Lambda$; the
perturbative beta function that controls the dependence of the
effective coupling on $\Lambda$ is negative for QCD. A perturbative
treatment is probably {\em not} sufficient for removing dependence
on $\l,$ however, except perhaps in certain limited domains. One
sign that the infrared cutoff is fundamentally different from
$\Lambda$ is that longitudinal momentum rescalings are a Lorentz
boost, and so must be an exact symmetry of the theory. There can be
no beta function associated with longitudinal scale transformations,
unlike rescalings in the transverse directions. A more physical
point is that all of the vacuum structure is removed by the cutoff
$\l$. It is very unlikely that the physics associated with the QCD
vacuum can be recreated in the form of counterterms using only
perturbation theory.
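For orientation, the statement that the perturbative beta function is negative for QCD refers to the familiar one-loop result,
\[
\beta(g) \;=\; -\frac{g^3}{16\pi^2}\left(11-\frac{2}{3}\,n_f\right)+O(g^5)\,,
\]
which is negative for $n_f\le 16$ quark flavors, so the effective coupling shrinks as the cutoff $\Lambda$ is raised.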

For problems where the structure of the vacuum does not play a
central role, however, such a perturbative treatment might be quite
useful. For example, in the study of heavy quarkonia one presumably
does not have to have complete control over the vacuum (e.g., a
nearly massless pion and linear long-range potentials) in order to
obtain reasonable results.

A more ambitious approach to the renormalization problem makes use
of Wilson's formulation of the renormalization group (RG)
\cite{40,11}. Here one studies sequences of Hamiltonians that
are obtained by iterating an RG transformation which lowers the
various cutoffs. The idea is to search for those trajectories of
Hamiltonians that can be infinitely long. A Hamiltonian that lies
on such a trajectory will give results that are equivalent to a
Hamiltonian with infinite cutoffs, that is, results that are
cutoff-independent. With a perturbative implementation of the RG
this method is equivalent to the first. It is clearly of interest
to develop nonperturbative realizations of the RG for use in LC
field theories.
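As a toy illustration of iterating a cutoff-lowering transformation (a single-coupling, one-loop flow; this is a sketch, not the actual LC renormalization group of Refs. \cite{40,11}, and all names are illustrative):

```python
import math

# One-loop QCD coefficient b0 = (11 - 2*n_f/3)/(16*pi^2), here for n_f = 3
B0 = (11 - 2 * 3 / 3) / (16 * math.pi ** 2)

def lower_cutoff(g, factor):
    """One RG step: Lambda -> factor * Lambda with factor < 1.

    Integrates dg/d ln(Lambda) = -B0 * g**3 in a single Euler step;
    for B0 > 0 (asymptotic freedom) the coupling grows as the cutoff
    is lowered.
    """
    return g - B0 * g ** 3 * math.log(factor)

def trajectory(g0, factor, steps):
    """Couplings along a trajectory of successively lowered cutoffs."""
    gs = [g0]
    for _ in range(steps):
        gs.append(lower_cutoff(gs[-1], factor))
    return gs
```

Each element of the list is a one-parameter caricature of a Hamiltonian at a lower cutoff; a trajectory that can be continued indefinitely corresponds to cutoff-independent physics.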